
I have an oVirt 4.0.1 installation on two nodes. Hosted Engine. SAN storage backend.
For some reason vdsmd on the nodes is logging an error every few seconds:
vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Running tcpdump, it is a connection from the node to itself. I can't figure out what is wrong. Can someone give me a hint?
Greetings Christoph

Can you share engine and VDSM logs please?
TIA, Y.
On Thu, Jul 21, 2016 at 9:22 AM, C. Handel <ovirt@macht-blau.org> wrote:
I have an oVirt 4.0.1 installation on two nodes. Hosted Engine. SAN storage backend.
For some reason vdsmd on the nodes is logging an error every few seconds:
vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Running tcpdump, it is a connection from the node to itself. I can't figure out what is wrong. Can someone give me a hint?
Greetings Christoph

Other than errors, is there anything that doesn't work in your environment?
On Jul 21, 2016 09:48, "Yaniv Kaul" <ykaul@redhat.com> wrote:
Can you share engine and VDSM logs please? TIA, Y.
On Thu, Jul 21, 2016 at 9:22 AM, C. Handel <ovirt@macht-blau.org> wrote:
I have an oVirt 4.0.1 installation on two nodes. Hosted Engine. SAN storage backend.
For some reason vdsmd on the nodes is logging an error every few seconds:
vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Running tcpdump, it is a connection from the node to itself. I can't figure out what is wrong. Can someone give me a hint?
Greetings Christoph

As far as I can tell, everything is fine. VMs are running, I can migrate them, they are restarted on crash, etc. No events in the dashboard, performance data is available.
Greetings Christoph
On Thu, Jul 21, 2016 at 8:59 AM, Oved Ourfali <oourfali@redhat.com> wrote:
Other than errors, is there anything that doesn't work in your environment?
On Jul 21, 2016 09:48, "Yaniv Kaul" <ykaul@redhat.com> wrote:
Can you share engine and VDSM logs please? TIA, Y.
On Thu, Jul 21, 2016 at 9:22 AM, C. Handel <ovirt@macht-blau.org> wrote:
I have an oVirt 4.0.1 installation on two nodes. Hosted Engine. SAN storage backend.
For some reason vdsmd on the nodes is logging an error every few seconds:
vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Running tcpdump, it is a connection from the node to itself. I can't figure out what is wrong. Can someone give me a hint?
Greetings Christoph

Check time/ntp settings on nodes and engine.
--
a.shvygin@grfc.ru
From: users-bounces@ovirt.org [mailto:users-bounces@ovirt.org] On Behalf Of C. Handel Sent: Thursday, July 21, 2016 10:03 AM To: Oved Ourfali Cc: users Subject: Re: [ovirt-users] vdsm ssl errors
As far as I can tell, everything is fine.
VMs are running, I can migrate them, they are restarted on crash, etc.
No events in the dashboard, performance data is available.
Greetings Christoph
On Thu, Jul 21, 2016 at 8:59 AM, Oved Ourfali <oourfali@redhat.com> wrote:
Other than errors, is there anything that doesn't work in your environment?
On Jul 21, 2016 09:48, "Yaniv Kaul" <ykaul@redhat.com> wrote:
Can you share engine and VDSM logs please? TIA, Y.
On Thu, Jul 21, 2016 at 9:22 AM, C. Handel <ovirt@macht-blau.org> wrote:
I have an oVirt 4.0.1 installation on two nodes. Hosted Engine. SAN storage backend.
For some reason vdsmd on the nodes is logging an error every few seconds:
vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Running tcpdump, it is a connection from the node to itself. I can't figure out what is wrong. Can someone give me a hint?
Greetings Christoph

All nodes and engines are synced to the same timeserver. As far as I can tell, the connection is from the node itself to the vdsm on the node, so no time difference can occur.
The certificate that gets presented by vdsm on port 54321 looks OK (one way to check it is sketched after the quoted thread below).
For what it is worth, I changed the certificate of the hosted-engine web interface by reconfiguring Apache's mod_ssl, not touching any certificate in hosted-engine:/etc/pki/ovirt-*
On Thu, Jul 21, 2016 at 9:22 AM, Швыгин Алексей Михайлович <a.shvygin@grfc.ru> wrote:
Check time/ntp settings on nodes and engine.
--
a.shvygin@grfc.ru
From: users-bounces@ovirt.org [mailto:users-bounces@ovirt.org] On Behalf Of C. Handel Sent: Thursday, July 21, 2016 10:03 AM To: Oved Ourfali Cc: users Subject: Re: [ovirt-users] vdsm ssl errors
As far as I can tell, everything is fine.
VMs are running, I can migrate them, they are restarted on crash, etc.
No events in the dashboard, performance data is available.
Greetings
Christoph
On Thu, Jul 21, 2016 at 8:59 AM, Oved Ourfali <oourfali@redhat.com> wrote:
Other than errors, is there anything that doesn't work in your environment?
On Jul 21, 2016 09:48, "Yaniv Kaul" <ykaul@redhat.com> wrote:
Can you share engine and VDSM logs please?
TIA,
Y.
On Thu, Jul 21, 2016 at 9:22 AM, C. Handel <ovirt@macht-blau.org> wrote:
I have an oVirt 4.0.1 installation on two nodes. Hosted Engine. SAN storage backend.
For some reason vdsmd on the nodes is logging an error every few seconds:
vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Running tcpdump, it is a connection from the node to itself. I can't figure out what is wrong. Can someone give me a hint?
Greetings
Christoph
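For reference, one quick way to fetch the certificate vdsm presents on 54321 is a short Python snippet (a minimal sketch; the host name is a placeholder):

    # Minimal sketch: print the PEM certificate served on the vdsm port.
    # "node1.example.org" stands in for the node's real host name.
    import ssl

    pem = ssl.get_server_certificate(("node1.example.org", 54321))
    print(pem)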

Based on the pieces of the logs that you provided, it is hard to tell what the reason for the connection being closed was. This message is logged when the python ssl library we use was reading data from the connection and it was closed by the other side (the engine). It is an expected message in those situations; a sketch of the pattern follows after the quoted thread below. Please provide more complete engine logs so we know why the connection was closed.
On Thu, Jul 21, 2016 at 9:31 AM, C. Handel <ovirt@macht-blau.org> wrote:
All nodes and engines are synced to the same timeserver.
As far as I can tell, the connection is from the node itself to the vdsm on the node, so no time difference can occur.
The certificate that gets presented by vdsm on port 54321 looks OK.
For what it is worth, I changed the certificate of the hosted-engine web interface by reconfiguring Apache's mod_ssl, not touching any certificate in hosted-engine:/etc/pki/ovirt-*
On Thu, Jul 21, 2016 at 9:22 AM, Швыгин Алексей Михайлович <a.shvygin@grfc.ru> wrote:
Check time/ntp settings on nodes and engine.
--
a.shvygin@grfc.ru
From: users-bounces@ovirt.org [mailto:users-bounces@ovirt.org] On Behalf Of C. Handel Sent: Thursday, July 21, 2016 10:03 AM To: Oved Ourfali Cc: users Subject: Re: [ovirt-users] vdsm ssl errors
As far as I can tell, everything is fine.
VMs are running, I can migrate them, they are restarted on crash, etc.
No events in the dashboard, performance data is available.
Greetings
Christoph
On Thu, Jul 21, 2016 at 8:59 AM, Oved Ourfali <oourfali@redhat.com> wrote:
Other than errors, is there anything that doesn't work in your environment?
On Jul 21, 2016 09:48, "Yaniv Kaul" <ykaul@redhat.com> wrote:
Can you share engine and VDSM logs please?
TIA,
Y.
On Thu, Jul 21, 2016 at 9:22 AM, C. Handel <ovirt@macht-blau.org> wrote:
I have an oVirt 4.0.1 installation on two nodes. Hosted Engine. SAN storage backend.
For some reason vdsmd on the nodes is logging an error every few seconds:
vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Running tcpdump, it is a connection from the node to itself. I can't figure out what is wrong. Can someone give me a hint?
Greetings
Christoph
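For reference, a minimal sketch of that pattern (hedged: this is the general shape of such an asyncore-style read handler, not vdsm's actual code):

    # Hedged sketch, not vdsm's code: treat a peer that drops the TLS
    # connection mid-read as EOF, logging the condition.
    import logging
    import ssl

    log = logging.getLogger("vds.dispatcher")

    def recv(sock, bufsize=4096):
        try:
            return sock.recv(bufsize)
        except ssl.SSLEOFError as e:
            # The other side closed the TCP connection without completing
            # the TLS shutdown (no close_notify alert was received).
            log.error("SSL error during reading data: %s", e)
            return b""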

Longer logs attached, excerpts:
---+ vdsm
Reactor thread::INFO::2016-07-21 08:01:19,544::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from ::ffff:140.181.138.208:59502
Reactor thread::DEBUG::2016-07-21 08:01:19,551::protocoldetector::85::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2016-07-21 08:01:19,553::protocoldetector::121::ProtocolDetector.Detector::(handle_read) Detected protocol stomp from ::ffff:140.181.138.208:59502
Reactor thread::INFO::2016-07-21 08:01:19,553::stompreactor::101::Broker.StompAdapter::(_cmd_connect) Processing CONNECT request
Reactor thread::DEBUG::2016-07-21 08:01:19,554::stompreactor::492::protocoldetector.StompDetector::(handle_socket) Stomp detected from ('::ffff:140.181.138.208', 59502)
JsonRpc (StompReactor)::INFO::2016-07-21 08:01:19,554::stompreactor::128::Broker.StompAdapter::(_cmd_subscribe) Subscribe command received
...
JsonRpc (StompReactor)::ERROR::2016-07-21 08:01:21,411::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: unexpected eof
---+ engine
2016-07-20 08:00:48,445 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) [1cff1b56] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:03,460 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler3) [78d8aa89] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
2016-07-20 08:01:03,473 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler2) [71a27a96] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:18,488 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler2) [71a27a96] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
2016-07-20 08:01:18,500 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler9) [47c419da] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:33,514 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler8) [735789] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
2016-07-20 08:01:33,527 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler3) [78d8aa89] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:48,543 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler8) [735789] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
Greetings Christoph
On Thu, Jul 21, 2016 at 8:47 AM, Yaniv Kaul <ykaul@redhat.com> wrote:
Can you share engine and VDSM logs please? TIA, Y.
On Thu, Jul 21, 2016 at 9:22 AM, C. Handel <ovirt@macht-blau.org> wrote:
I have an oVirt 4.0.1 installation on two nodes. Hosted Engine. SAN storage backend.
For some reason vdsmd on the nodes is logging an error every few seconds:
vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Running tcpdump, it is a connection from the node to itself. I can't figure out what is wrong. Can someone give me a hint?
Greetings Christoph

On Thu, Jul 21, 2016 at 10:00 AM, C. Handel <ovirt@macht-blau.org> wrote:
Longer logs attached, excerpts:
---+ vdsm
Reactor thread::INFO::2016-07-21 08:01:19,544::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from ::ffff:140.181.138.208:59502
Reactor thread::DEBUG::2016-07-21 08:01:19,551::protocoldetector::85::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2016-07-21 08:01:19,553::protocoldetector::121::ProtocolDetector.Detector::(handle_read) Detected protocol stomp from ::ffff:140.181.138.208:59502
Reactor thread::INFO::2016-07-21 08:01:19,553::stompreactor::101::Broker.StompAdapter::(_cmd_connect) Processing CONNECT request
Reactor thread::DEBUG::2016-07-21 08:01:19,554::stompreactor::492::protocoldetector.StompDetector::(handle_socket) Stomp detected from ('::ffff:140.181.138.208', 59502)
JsonRpc (StompReactor)::INFO::2016-07-21 08:01:19,554::stompreactor::128::Broker.StompAdapter::(_cmd_subscribe) Subscribe command received
...
JsonRpc (StompReactor)::ERROR::2016-07-21 08:01:21,411::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: unexpected eof
This log is not very useful as is, we must show the relevant remote address.
Should be improved in https://gerrit.ovirt.org/61303
Can you try this patch and share the log?
---+ engine
2016-07-20 08:00:48,445 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) [1cff1b56] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:03,460 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler3) [78d8aa89] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
2016-07-20 08:01:03,473 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler2) [71a27a96] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:18,488 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler2) [71a27a96] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
2016-07-20 08:01:18,500 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler9) [47c419da] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:33,514 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler8) [735789] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
2016-07-20 08:01:33,527 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler3) [78d8aa89] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:48,543 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler8) [735789] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
Greetings Christoph
On Thu, Jul 21, 2016 at 8:47 AM, Yaniv Kaul <ykaul@redhat.com> wrote:
Can you share engine and VDSM logs please? TIA, Y.
On Thu, Jul 21, 2016 at 9:22 AM, C. Handel <ovirt@macht-blau.org> wrote:
I have an oVirt 4.0.1 installation on two nodes. Hosted Engine. SAN storage backend.
For some reason vdsmd on the nodes is logging an error every few seconds:
vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Running tcpdump, it is a connection from the node to itself. I can't figure out what is wrong. Can someone give me a hint?
Greetings Christoph

Patch applied. The connection is from the node itself.
Logfiles with the last 100k of data attached (hope this is enough; the error happens every few seconds).
Greetings Christoph
On Mon, Jul 25, 2016 at 12:07 PM, Nir Soffer <nsoffer@redhat.com> wrote:
On Thu, Jul 21, 2016 at 10:00 AM, C. Handel <ovirt@macht-blau.org> wrote:
Longer logs attached, excerpts:
---+ vdsm
Reactor thread::INFO::2016-07-21 08:01:19,544::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from ::ffff:140.181.138.208:59502
Reactor thread::DEBUG::2016-07-21 08:01:19,551::protocoldetector::85::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2016-07-21 08:01:19,553::protocoldetector::121::ProtocolDetector.Detector::(handle_read) Detected protocol stomp from ::ffff:140.181.138.208:59502
Reactor thread::INFO::2016-07-21 08:01:19,553::stompreactor::101::Broker.StompAdapter::(_cmd_connect) Processing CONNECT request
Reactor thread::DEBUG::2016-07-21 08:01:19,554::stompreactor::492::protocoldetector.StompDetector::(handle_socket) Stomp detected from ('::ffff:140.181.138.208', 59502)
JsonRpc (StompReactor)::INFO::2016-07-21 08:01:19,554::stompreactor::128::Broker.StompAdapter::(_cmd_subscribe) Subscribe command received
...
JsonRpc (StompReactor)::ERROR::2016-07-21 08:01:21,411::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: unexpected eof
This log is not very useful as is, we must show the relevant remote address.
Should be improved in https://gerrit.ovirt.org/61303
Can you try this patch and share the log?
---+ engine
2016-07-20 08:00:48,445 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) [1cff1b56] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:03,460 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler3) [78d8aa89] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
2016-07-20 08:01:03,473 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler2) [71a27a96] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:18,488 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler2) [71a27a96] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
2016-07-20 08:01:18,500 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler9) [47c419da] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:33,514 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler8) [735789] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
2016-07-20 08:01:33,527 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler3) [78d8aa89] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:48,543 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler8) [735789] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
Greetings Christoph
On Thu, Jul 21, 2016 at 8:47 AM, Yaniv Kaul <ykaul@redhat.com> wrote:
Can you share engine and VDSM logs please? TIA, Y.
On Thu, Jul 21, 2016 at 9:22 AM, C. Handel <ovirt@macht-blau.org> wrote:
I have an oVirt 4.0.1 installation on two nodes. Hosted Engine. SAN storage backend.
For some reason vdsmd on the nodes is logging an error every few seconds:
vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Running tcpdump, it is a connection from the node to itself. I can't figure out what is wrong. Can someone give me a hint?
Greetings Christoph

On Mon, Jul 25, 2016 at 4:45 PM, C. Handel <ovirt@macht-blau.org> wrote:
Patch applied. The connection is from the node itself.
Logfiles with the last 100k of data attached (hope this is enough; the error happens every few seconds).
Thanks for testing. Is this a hosted engine setup?
The hosted engine ha agent is using jsonrpc, and it creates a new connection for every request; maybe what we see is the side effect of closing the old connection uncleanly.
I guess that if you log all tcp connections to localhost:54321 (netstat? lsof?) and the applications using them, you will find the process causing the unexpected eof. One way to do that is sketched after the quoted thread below.
Nir
Greetings Christoph
On Mon, Jul 25, 2016 at 12:07 PM, Nir Soffer <nsoffer@redhat.com> wrote:
On Thu, Jul 21, 2016 at 10:00 AM, C. Handel <ovirt@macht-blau.org> wrote:
Longer logs attached, excerpts:
---+ vdsm
Reactor thread::INFO::2016-07-21 08:01:19,544::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from ::ffff:140.181.138.208:59502
Reactor thread::DEBUG::2016-07-21 08:01:19,551::protocoldetector::85::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2016-07-21 08:01:19,553::protocoldetector::121::ProtocolDetector.Detector::(handle_read) Detected protocol stomp from ::ffff:140.181.138.208:59502
Reactor thread::INFO::2016-07-21 08:01:19,553::stompreactor::101::Broker.StompAdapter::(_cmd_connect) Processing CONNECT request
Reactor thread::DEBUG::2016-07-21 08:01:19,554::stompreactor::492::protocoldetector.StompDetector::(handle_socket) Stomp detected from ('::ffff:140.181.138.208', 59502)
JsonRpc (StompReactor)::INFO::2016-07-21 08:01:19,554::stompreactor::128::Broker.StompAdapter::(_cmd_subscribe) Subscribe command received
...
JsonRpc (StompReactor)::ERROR::2016-07-21 08:01:21,411::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: unexpected eof
This log is not very useful as is, we must show the relevant remote address.
Should be improved in https://gerrit.ovirt.org/61303
Can you try this patch and share the log?
---+ engine
2016-07-20 08:00:48,445 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) [1cff1b56] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:03,460 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler3) [78d8aa89] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
2016-07-20 08:01:03,473 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler2) [71a27a96] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:18,488 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler2) [71a27a96] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
2016-07-20 08:01:18,500 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler9) [47c419da] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:33,514 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler8) [735789] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
2016-07-20 08:01:33,527 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler3) [78d8aa89] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:48,543 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler8) [735789] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
Greetings Christoph
On Thu, Jul 21, 2016 at 8:47 AM, Yaniv Kaul <ykaul@redhat.com> wrote:
Can you share engine and VDSM logs please? TIA, Y.
On Thu, Jul 21, 2016 at 9:22 AM, C. Handel <ovirt@macht-blau.org> wrote:
I have an oVirt 4.0.1 installation on two nodes. Hosted Engine. SAN storage backend.
For some reason vdsmd on the nodes is logging an error every few seconds:
vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Running tcpdump, it is a connection from the node to itself. I can't figure out what is wrong. Can someone give me a hint?
Greetings Christoph
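One rough way to do that logging in Python, using the third-party psutil package (an assumption on my part; a netstat/lsof loop would work just as well): poll connections to the vdsm port and name the owning process before it exits.

    # Hypothetical watcher, not part of vdsm; run as root so the pids of
    # all processes are visible. Requires the third-party psutil package.
    import time
    import psutil

    VDSM_PORT = 54321
    seen = set()

    while True:
        for conn in psutil.net_connections(kind="tcp"):
            if conn.raddr and conn.raddr.port == VDSM_PORT and conn.pid:
                key = (conn.laddr.port, conn.pid)
                if key in seen:
                    continue
                seen.add(key)
                try:
                    cmdline = " ".join(psutil.Process(conn.pid).cmdline())
                except psutil.NoSuchProcess:
                    cmdline = "<process exited before it could be inspected>"
                print("%s:%s -> :%d pid=%s %s" % (conn.laddr.ip, conn.laddr.port,
                                                  VDSM_PORT, conn.pid, cmdline))
        time.sleep(0.05)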

Christoph,
In the log snippets you provided I can see 2 occurrences of the log entry. There are 3 seconds between the calls.
Each time I see calls to Host.getStats and Host.getHardwareInfo, both from x.x.138.208 (the rough shape of such a call is sketched after the quoted thread below). I do not see any log entries in the engine log, so it is not the engine that connected.
What host is it?
Thanks, Piotr
On Mon, Jul 25, 2016 at 3:45 PM, C. Handel <ovirt@macht-blau.org> wrote:
Patch applied. The connection is from the node itself.
Logfiles with the last 100k of data attached (hope this is enough; the error happens every few seconds).
Greetings Christoph
On Mon, Jul 25, 2016 at 12:07 PM, Nir Soffer <nsoffer@redhat.com> wrote:
On Thu, Jul 21, 2016 at 10:00 AM, C. Handel <ovirt@macht-blau.org> wrote:
Longer logs attached, excerpts:
---+ vdsm
Reactor thread::INFO::2016-07-21 08:01:19,544::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from ::ffff:140.181.138.208:59502
Reactor thread::DEBUG::2016-07-21 08:01:19,551::protocoldetector::85::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2016-07-21 08:01:19,553::protocoldetector::121::ProtocolDetector.Detector::(handle_read) Detected protocol stomp from ::ffff:140.181.138.208:59502
Reactor thread::INFO::2016-07-21 08:01:19,553::stompreactor::101::Broker.StompAdapter::(_cmd_connect) Processing CONNECT request
Reactor thread::DEBUG::2016-07-21 08:01:19,554::stompreactor::492::protocoldetector.StompDetector::(handle_socket) Stomp detected from ('::ffff:140.181.138.208', 59502)
JsonRpc (StompReactor)::INFO::2016-07-21 08:01:19,554::stompreactor::128::Broker.StompAdapter::(_cmd_subscribe) Subscribe command received
...
JsonRpc (StompReactor)::ERROR::2016-07-21 08:01:21,411::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: unexpected eof
This log is not very useful as is, we must show the relevant remote address.
Should be improved in https://gerrit.ovirt.org/61303
Can you try this patch and share the log?
---+ engine
2016-07-20 08:00:48,445 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) [1cff1b56] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:03,460 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler3) [78d8aa89] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
2016-07-20 08:01:03,473 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler2) [71a27a96] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:18,488 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler2) [71a27a96] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
2016-07-20 08:01:18,500 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler9) [47c419da] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:33,514 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler8) [735789] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
2016-07-20 08:01:33,527 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler3) [78d8aa89] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:48,543 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler8) [735789] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
Greetings Christoph
On Thu, Jul 21, 2016 at 8:47 AM, Yaniv Kaul <ykaul@redhat.com> wrote:
Can you share engine and VDSM logs please? TIA, Y.
On Thu, Jul 21, 2016 at 9:22 AM, C. Handel <ovirt@macht-blau.org> wrote:
I have an oVirt 4.0.1 installation on two nodes. Hosted Engine. SAN storage backend.
For some reason vdsmd on the nodes is logging an error every few seconds:
vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Running tcpdump, it is a connection from the node to itself. I can't figure out what is wrong. Can someone give me a hint?
Greetings Christoph
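For reference, a vdsm JSON-RPC call of this kind is a JSON-RPC 2.0 payload carried inside a STOMP SEND frame. A rough sketch of the request body for one of the observed verbs (illustrative wire format, not a frame captured from these logs):

    # Sketch of a JSON-RPC 2.0 request for one of the observed verbs.
    import json
    import uuid

    request = {
        "jsonrpc": "2.0",
        "method": "Host.getHardwareInfo",
        "params": {},
        "id": str(uuid.uuid4()),
    }
    print(json.dumps(request))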

Two nodes (x.x.138.208, x.x.138.210), hosted-engine on x.x.139.240.
The vdsm logs are from x.x.138.208 and the connection is from the node itself.
Running tcpdump, it is a connection from the node to itself. I can't figure out what is wrong. Can someone give me a hint?
So I know they are coming from the node itself. The process connecting is terminating too fast: the moment I see it in tcpdump, it is gone from the process table.
Greetings Christoph
On Mon, Jul 25, 2016 at 4:53 PM, Piotr Kliczewski <piotr.kliczewski@gmail.com> wrote:
Christoph,
In the log snippets you provided I can see 2 occurrences of the log entry. There are 3 seconds between the calls.
Each time I see calls to Host.getStats and Host.getHardwareInfo, both from x.x.138.208. I do not see any log entries in the engine log, so it is not the engine that connected.
What host is it?
Thanks, Piotr
On Mon, Jul 25, 2016 at 3:45 PM, C. Handel <ovirt@macht-blau.org> wrote:
Patch applied. The connection is from the node itself.
Logfiles with the last 100k of data attached (hope this is enough; the error happens every few seconds).
Greetings Christoph
On Mon, Jul 25, 2016 at 12:07 PM, Nir Soffer <nsoffer@redhat.com> wrote:
On Thu, Jul 21, 2016 at 10:00 AM, C. Handel <ovirt@macht-blau.org> wrote:
Longer logs attached, excerpts:
---+ vdsm
Reactor thread::INFO::2016-07-21 08:01:19,544::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from ::ffff:140.181.138.208:59502
Reactor thread::DEBUG::2016-07-21 08:01:19,551::protocoldetector::85::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2016-07-21 08:01:19,553::protocoldetector::121::ProtocolDetector.Detector::(handle_read) Detected protocol stomp from ::ffff:140.181.138.208:59502
Reactor thread::INFO::2016-07-21 08:01:19,553::stompreactor::101::Broker.StompAdapter::(_cmd_connect) Processing CONNECT request
Reactor thread::DEBUG::2016-07-21 08:01:19,554::stompreactor::492::protocoldetector.StompDetector::(handle_socket) Stomp detected from ('::ffff:140.181.138.208', 59502)
JsonRpc (StompReactor)::INFO::2016-07-21 08:01:19,554::stompreactor::128::Broker.StompAdapter::(_cmd_subscribe) Subscribe command received
...
JsonRpc (StompReactor)::ERROR::2016-07-21 08:01:21,411::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: unexpected eof
This log is not very useful as is, we must show the relevant remote address.
Should be improved in https://gerrit.ovirt.org/61303
Can you try this patch and share the log?
---+ engine
2016-07-20 08:00:48,445 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) [1cff1b56] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:03,460 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler3) [78d8aa89] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
2016-07-20 08:01:03,473 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler2) [71a27a96] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:18,488 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler2) [71a27a96] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
2016-07-20 08:01:18,500 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler9) [47c419da] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:33,514 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler8) [735789] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
2016-07-20 08:01:33,527 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler3) [78d8aa89] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:48,543 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler8) [735789] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
Greetings Christoph
On Thu, Jul 21, 2016 at 8:47 AM, Yaniv Kaul <ykaul@redhat.com> wrote:
Can you share engine and VDSM logs please? TIA, Y.
On Thu, Jul 21, 2016 at 9:22 AM, C. Handel <ovirt@macht-blau.org> wrote:
I have an oVirt 4.0.1 installation on two nodes. Hosted Engine. SAN storage backend.
For some reason vdsmd on the nodes is logging an error every few seconds:
vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Running tcpdump, it is a connection from the node to itself. I can't figure out what is wrong. Can someone give me a hint?
Greetings Christoph

On Mon, Jul 25, 2016 at 6:22 PM, C. Handel <ovirt@macht-blau.org> wrote:
Two nodes (x.x.138.208, x.x.138.210), hosted-engine on x.x.139.240.
The vdsm logs are from x.x.138.208 and the connection is from the node itself.
Running tcpdump, it is a connection from the node to itself. I can't figure out what is wrong. Can someone give me a hint?
So I know they are coming from the node itself. The process connecting is terminating too fast: the moment I see it in tcpdump, it is gone from the process table.
I think this is the ovirt hosted engine agent - check its logs, you will probably find that it makes some request at the same time you see the errors in your logs. Adding Martin, maintaining this project.
These are the interesting events in the logs:
$ grep 37678 vdsm.log
JsonRpc (StompReactor)::ERROR::2016-07-25 13:48:58,074::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data from <yajsonrpc.betterAsyncore.Dispatcher connected ('::ffff:140.181.138.208', 37678, 0, 0) at 0x42c9b90>: unexpected eof
$ grep 37684 vdsm.log
Reactor thread::INFO::2016-07-25 13:49:00,205::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from ::ffff:140.181.138.208:37684
Reactor thread::INFO::2016-07-25 13:49:00,211::protocoldetector::121::ProtocolDetector.Detector::(handle_read) Detected protocol stomp from ::ffff:140.181.138.208:37684
Reactor thread::DEBUG::2016-07-25 13:49:00,211::stompreactor::492::protocoldetector.StompDetector::(handle_socket) Stomp detected from ('::ffff:140.181.138.208', 37684)
JsonRpc (StompReactor)::ERROR::2016-07-25 13:49:01,824::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data from <yajsonrpc.betterAsyncore.Dispatcher connected ('::ffff:140.181.138.208', 37684, 0, 0) at 0x42b0758>: unexpected eof
The log is too small, we see only one full request.
Piotr, can you understand from this log what the request coming from ::ffff:140.181.138.208:37684 is doing? (A sketch of the kind of unclean close that would leave exactly this trace follows after the quoted thread below.)
Greetings Christoph
On Mon, Jul 25, 2016 at 4:53 PM, Piotr Kliczewski <piotr.kliczewski@gmail.com> wrote:
Christoph,
In the log snippets you provided I can see 2 occurrences of the log entry. There are 3 seconds between the calls.
Each time I see calls to Host.getStats and Host.getHardwareInfo, both from x.x.138.208. I do not see any log entries in the engine log, so it is not the engine that connected.
What host is it?
Thanks, Piotr
On Mon, Jul 25, 2016 at 3:45 PM, C. Handel <ovirt@macht-blau.org> wrote:
Patch applied. The connection is from the node itself.
Logfiles with the last 100k of data attached (hope this is enough; the error happens every few seconds).
Greetings Christoph
On Mon, Jul 25, 2016 at 12:07 PM, Nir Soffer <nsoffer@redhat.com> wrote:
On Thu, Jul 21, 2016 at 10:00 AM, C. Handel <ovirt@macht-blau.org> wrote:
Longer logs attached, excerpts:
---+ vdsm
Reactor thread::INFO::2016-07-21 08:01:19,544::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from ::ffff:140.181.138.208:59502
Reactor thread::DEBUG::2016-07-21 08:01:19,551::protocoldetector::85::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2016-07-21 08:01:19,553::protocoldetector::121::ProtocolDetector.Detector::(handle_read) Detected protocol stomp from ::ffff:140.181.138.208:59502
Reactor thread::INFO::2016-07-21 08:01:19,553::stompreactor::101::Broker.StompAdapter::(_cmd_connect) Processing CONNECT request
Reactor thread::DEBUG::2016-07-21 08:01:19,554::stompreactor::492::protocoldetector.StompDetector::(handle_socket) Stomp detected from ('::ffff:140.181.138.208', 59502)
JsonRpc (StompReactor)::INFO::2016-07-21 08:01:19,554::stompreactor::128::Broker.StompAdapter::(_cmd_subscribe) Subscribe command received
...
JsonRpc (StompReactor)::ERROR::2016-07-21 08:01:21,411::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: unexpected eof
This log is not very useful as is, we must show the relevant remote address.
Should be improved in https://gerrit.ovirt.org/61303
Can you try this patch and share the log?
---+ engine
2016-07-20 08:00:48,445 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) [1cff1b56] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:03,460 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler3) [78d8aa89] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
2016-07-20 08:01:03,473 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler2) [71a27a96] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:18,488 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler2) [71a27a96] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
2016-07-20 08:01:18,500 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler9) [47c419da] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:33,514 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler8) [735789] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
2016-07-20 08:01:33,527 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler3) [78d8aa89] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:48,543 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler8) [735789] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
Greetings Christoph
On Thu, Jul 21, 2016 at 8:47 AM, Yaniv Kaul <ykaul@redhat.com> wrote:
Can you share engine and VDSM logs please? TIA, Y.
On Thu, Jul 21, 2016 at 9:22 AM, C. Handel <ovirt@macht-blau.org> wrote:
I have an oVirt 4.0.1 installation on two nodes. Hosted Engine. SAN storage backend.
For some reason vdsmd on the nodes is logging an error every few seconds:
vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Running tcpdump, it is a connection from the node to itself. I can't figure out what is wrong. Can someone give me a hint?
Greetings Christoph
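For illustration, a minimal standard-library sketch of the kind of unclean close described above (assumed demo code, not the ha agent's; certificate verification is disabled for brevity):

    # Hedged demo: connect to the vdsm port over TLS, send one frame, then
    # close the TCP socket without the TLS closing handshake. The server's
    # next read then fails with "unexpected eof".
    import socket
    import ssl

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # demo only; real clients verify the oVirt CA

    sock = ctx.wrap_socket(socket.create_connection(("localhost", 54321)))
    sock.sendall(b"CONNECT\naccept-version:1.2\n\n\x00")  # a minimal STOMP frame
    # close() without unwrap() skips the close_notify alert, so the peer
    # sees the stream end mid-TLS and logs the unexpected eof.
    sock.close()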

On Mon, Jul 25, 2016 at 6:18 PM, Nir Soffer <nsoffer@redhat.com> wrote:
On Mon, Jul 25, 2016 at 6:22 PM, C. Handel <ovirt@macht-blau.org> wrote:
Two nodes (x.x.138.208, x.x.138.210), hosted-engine on x.x.139.240.
The vdsm logs are from x.x.138.208 and the connection is from the node itself.
Running tcpdump, it is a connection from the node to itself. I can't figure out what is wrong. Can someone give me a hint?
So I know they are coming from the node itself. The process connecting is terminating too fast: the moment I see it in tcpdump, it is gone from the process table.
I think this is the ovirt hosted engine agent - check its logs, you will probably find that it makes some request at the same time you see the errors in your logs.
Adding Martin, maintaining this project.
These are the interesting events in the logs:
$ grep 37678 vdsm.log
JsonRpc (StompReactor)::ERROR::2016-07-25 13:48:58,074::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data from <yajsonrpc.betterAsyncore.Dispatcher connected ('::ffff:140.181.138.208', 37678, 0, 0) at 0x42c9b90>: unexpected eof
$ grep 37684 vdsm.log
Reactor thread::INFO::2016-07-25 13:49:00,205::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from ::ffff:140.181.138.208:37684
Reactor thread::INFO::2016-07-25 13:49:00,211::protocoldetector::121::ProtocolDetector.Detector::(handle_read) Detected protocol stomp from ::ffff:140.181.138.208:37684
Reactor thread::DEBUG::2016-07-25 13:49:00,211::stompreactor::492::protocoldetector.StompDetector::(handle_socket) Stomp detected from ('::ffff:140.181.138.208', 37684)
JsonRpc (StompReactor)::ERROR::2016-07-25 13:49:01,824::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data from <yajsonrpc.betterAsyncore.Dispatcher connected ('::ffff:140.181.138.208', 37684, 0, 0) at 0x42b0758>: unexpected eof
The log is too small, we see only one full request.
Piotr, can you understand from this log what the request coming from ::ffff:140.181.138.208:37684 is doing?
I stated above that there are 2 verbs that were called each time: Host.getStats and Host.getHardwareInfo.
Greetings Christoph
On Mon, Jul 25, 2016 at 4:53 PM, Piotr Kliczewski <piotr.kliczewski@gmail.com> wrote:
Christoph,
In the log snippets you provided I can see 2 occurrences of the log entry. There are 3 seconds between the calls.
Each time I see calls to Host.getStats and Host.getHardwareInfo, both from x.x.138.208. I do not see any log entries in the engine log, so it is not the engine that connected.
What host is it?
Thanks, Piotr
On Mon, Jul 25, 2016 at 3:45 PM, C. Handel <ovirt@macht-blau.org> wrote:
Patch applied. The connection is from the node itself.
Logfiles with the last 100k of data attached (hope this is enough; the error happens every few seconds).
Greetings Christoph
On Mon, Jul 25, 2016 at 12:07 PM, Nir Soffer <nsoffer@redhat.com> wrote:
On Thu, Jul 21, 2016 at 10:00 AM, C. Handel <ovirt@macht-blau.org> wrote:
Longer logs attached, excerpts:
---+ vdsm
Reactor thread::INFO::2016-07-21 08:01:19,544::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from ::ffff:140.181.138.208:59502
Reactor thread::DEBUG::2016-07-21 08:01:19,551::protocoldetector::85::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2016-07-21 08:01:19,553::protocoldetector::121::ProtocolDetector.Detector::(handle_read) Detected protocol stomp from ::ffff:140.181.138.208:59502
Reactor thread::INFO::2016-07-21 08:01:19,553::stompreactor::101::Broker.StompAdapter::(_cmd_connect) Processing CONNECT request
Reactor thread::DEBUG::2016-07-21 08:01:19,554::stompreactor::492::protocoldetector.StompDetector::(handle_socket) Stomp detected from ('::ffff:140.181.138.208', 59502)
JsonRpc (StompReactor)::INFO::2016-07-21 08:01:19,554::stompreactor::128::Broker.StompAdapter::(_cmd_subscribe) Subscribe command received
...
JsonRpc (StompReactor)::ERROR::2016-07-21 08:01:21,411::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: unexpected eof
This log is not very useful as is, we must show the relevant remote address.
Should be improved in https://gerrit.ovirt.org/61303
Can you try this patch and share the log?
---+ engine
2016-07-20 08:00:48,445 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) [1cff1b56] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:03,460 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler3) [78d8aa89] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
2016-07-20 08:01:03,473 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler2) [71a27a96] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:18,488 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler2) [71a27a96] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
2016-07-20 08:01:18,500 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler9) [47c419da] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:33,514 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler8) [735789] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
2016-07-20 08:01:33,527 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler3) [78d8aa89] Fetched 7 VMs from VDS 'a46f9de2-4b79-45d4-bed5-f4f472eb8361'
2016-07-20 08:01:48,543 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler8) [735789] Fetched 0 VMs from VDS '6f7a9201-e753-4875-b89d-024120067687'
Greetings Christoph

Christoph,
Please apply [1] so we will know exactly what verbs were called by this local client.
Thanks, Piotr
[1] https://gerrit.ovirt.org/#/c/61367/
On Mon, Jul 25, 2016 at 9:50 PM, Piotr Kliczewski <piotr.kliczewski@gmail.com> wrote:
On Mon, Jul 25, 2016 at 6:22 PM, C. Handel <ovirt@macht-blau.org> wrote:
two nodes (x.x.138.208, x.x.138.210), hosted-engine on x.x.139.240.
the vdsm logs are from x.x.138.208 and the connection is from the node itself.
> Running tcpdump it is a connect from the node to itself. I can't figure out what is wrong. Can someone give me a hint?
so i know they are coming from the node itself. The process connecting is terminating too fast. the moment i see it in tcpdump, it is gone from the process table.
On Mon, Jul 25, 2016 at 6:18 PM, Nir Soffer <nsoffer@redhat.com> wrote:
I think this is the ovirt hosted-engine agent - check its logs; you will probably find that it makes some request at the same time you see the errors in your logs.
Adding Martin, who maintains this project.
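For reference, one way to cross-check might be to grep the hosted-engine HA logs for the timestamps of the SSL errors (standard log locations; the timestamp below is just a placeholder to adjust):

  grep '13:48:5' /var/log/ovirt-hosted-engine-ha/agent.log /var/log/ovirt-hosted-engine-ha/broker.log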
These are the interesting events in the logs:
$ grep 37678 vdsm.log
JsonRpc (StompReactor)::ERROR::2016-07-25 13:48:58,074::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data from <yajsonrpc.betterAsyncore.Dispatcher connected ('::ffff:140.181.138.208', 37678, 0, 0) at 0x42c9b90>: unexpected eof

$ grep 37684 vdsm.log
Reactor thread::INFO::2016-07-25 13:49:00,205::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from ::ffff:140.181.138.208:37684
Reactor thread::INFO::2016-07-25 13:49:00,211::protocoldetector::121::ProtocolDetector.Detector::(handle_read) Detected protocol stomp from ::ffff:140.181.138.208:37684
Reactor thread::DEBUG::2016-07-25 13:49:00,211::stompreactor::492::protocoldetector.StompDetector::(handle_socket) Stomp detected from ('::ffff:140.181.138.208', 37684)
JsonRpc (StompReactor)::ERROR::2016-07-25 13:49:01,824::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data from <yajsonrpc.betterAsyncore.Dispatcher connected ('::ffff:140.181.138.208', 37684, 0, 0) at 0x42b0758>: unexpected eof
The log is too small, we see only one full request.
Piotr, can you understand from this log what the request coming from ::ffff:140.181.138.208:37684 is doing?
I stated above there are 2 verbs that were called each time:
Host.getStats and Host.getHardwareInfo
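Since the client terminates too quickly to show up in the process table, one possible way to catch it in the act might be to poll ss in a tight loop as root (an untested sketch; 54321 is the vdsm port):

  while sleep 0.2; do ss -tnp '( dport = :54321 )'; done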
Greetings Christoph

On Mon, Jul 25, 2016 at 12:07 PM, Nir Soffer <nsoffer@redhat.com> wrote:
This log is not very useful as is, we must show the relevant remote address.
Should be improved in https://gerrit.ovirt.org/61303
Can you try this patch and share the log?
Hello, I'm taking this up as I have the same problem. I'm on 4.0.3 and it seems the gerrit patch above was not included, so I applied it and restarted vdsmd.
Now I have

Oct 07 19:54:10 ovirt01.lutwyn.org vdsm[11306]: vdsm vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher connected 192.168.1.211:36296 at 0x359c5f0>: unexpected eof

In my case it is a single host environment with Self Hosted Engine.
IP of the host is 192.168.1.211
IP of the engine is 192.168.1.212
Let me know if you need full logs and which ones. In the meantime, 1000 lines around the error in vdsm.log here:
https://drive.google.com/file/d/0BwoPbcrMv8mvTk9SYTF0UDZUMUU/view?usp=sharin...
Thanks, Gianluca

Gianluca,
Please share the engine log. We should find more info about the issue there.
Thanks, Piotr
On 7 Oct 2016 at 20:09, "Gianluca Cecchi" <gianluca.cecchi@gmail.com> wrote:

On Fri, Oct 7, 2016 at 10:14 PM, Piotr Kliczewski <pkliczew@redhat.com> wrote:
Gianluca,
Please share the engine log. We shoukd find more info about the issue there.
Thanks, Piotr
here it is
https://drive.google.com/file/d/0BwoPbcrMv8mvQlVwVDlGTVEtR00/view?usp=sharin...
Gianluca

Gianluca,
In the log provided I only see plenty of:

2016-10-07 23:24:43,159 ERROR [org.ovirt.engine.core.bll.GetUserProfileQuery] (default task-2) [] Query 'GetUserProfileQuery' failed: PreparedStatementCallback; bad SQL grammar [select * from getuserprofilebyuserid(?)]; nested exception is org.postgresql.util.PSQLException: The column name user_portal_vm_auto_login was not found in this ResultSet.
2016-10-07 23:24:43,159 ERROR [org.ovirt.engine.core.bll.GetUserProfileQuery] (default task-2) [] Exception: org.springframework.jdbc.BadSqlGrammarException: PreparedStatementCallback; bad SQL grammar [select * from getuserprofilebyuserid(?)]; nested exception is org.postgresql.util.PSQLException: The column name user_portal_vm_auto_login was not found in this ResultSet.

This may not be related, but I do not see any other exception in there.
@Eli can you please take a look?
Thanks, Piotr

Fixed in "Bug 1371515 - Exception on GetUserProfileQuery (unknown cause) : "The column name user_portal_vm_auto_login was not found in this ResultSet". Please upgrade to latest 4.0. On Sun, Oct 9, 2016 at 10:47 AM, Piotr Kliczewski < piotr.kliczewski@gmail.com> wrote:

On 09 Oct 2016 at 09:51, "Oved Ourfali" <oourfali@redhat.com> wrote:
Fixed in "Bug 1371515 - Exception on GetUserProfileQuery (unknown cause)
: "The column name user_portal_vm_auto_login was not found in this ResultSet".
Please upgrade to latest 4.0.
Hello,
updated to 4.0.4. I don't see the message about the ResultSet any more, but I continue to see the SSL errors; it doesn't change much.
All seems to be running: 3 VMs including the Hosted Engine.
Here the vdsm.log in gzip format:
https://drive.google.com/file/d/0BwoPbcrMv8mvdDFFOEhTQ3o1ZXM/view?usp=sharin...

[root@ovirt01 vdsm]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/etc/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2016-10-11 23:59:02 CEST; 17min ago
  Process: 19335 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh --post-stop (code=exited, status=0/SUCCESS)
  Process: 19338 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
 Main PID: 19421 (vdsm)
   CGroup: /system.slice/vdsmd.service
           ├─19421 /usr/bin/python /usr/share/vdsm/vdsm
           ├─19539 /usr/libexec/ioprocess --read-pipe-fd 69 --write-pipe-fd 67 --max-threads 10 --max-queued-requests 10
           ├─19599 /usr/libexec/ioprocess --read-pipe-fd 98 --write-pipe-fd 97 --max-threads 10 --max-queued-requests 10
           ├─19621 /usr/libexec/ioprocess --read-pipe-fd 109 --write-pipe-fd 108 --max-threads 10 --max-queued-requests 10
           ├─23421 /usr/libexec/ioprocess --read-pipe-fd 44 --write-pipe-fd 43 --max-threads 10 --max-queued-requests 10
           ├─23432 /usr/libexec/ioprocess --read-pipe-fd 52 --write-pipe-fd 51 --max-threads 10 --max-queued-requests 10
           ├─23440 /usr/libexec/ioprocess --read-pipe-fd 59 --write-pipe-fd 57 --max-threads 10 --max-queued-requests 10
           ├─23448 /usr/libexec/ioprocess --read-pipe-fd 71 --write-pipe-fd 69 --max-threads 10 --max-queued-requests 10
           ├─23457 /usr/libexec/ioprocess --read-pipe-fd 85 --write-pipe-fd 83 --max-threads 10 --max-queued-requests 10
           └─23629 /usr/libexec/ioprocess --read-pipe-fd 117 --write-pipe-fd 116 --max-threads 10 --max-queued-requests 10

Oct 12 00:15:44 ovirt01.mydomain vdsm[19421]: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Oct 12 00:15:46 ovirt01.mydomain vdsm[19421]: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Oct 12 00:15:49 ovirt01.mydomain vdsm[19421]: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Oct 12 00:15:51 ovirt01.mydomain vdsm[19421]: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Oct 12 00:16:03 ovirt01.mydomain vdsm[19421]: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Oct 12 00:16:07 ovirt01.mydomain vdsm[19421]: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Oct 12 00:16:07 ovirt01.mydomain vdsm[19421]: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Oct 12 00:16:10 ovirt01.mydomain vdsm[19421]: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Oct 12 00:16:12 ovirt01.mydomain vdsm[19421]: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Oct 12 00:16:14 ovirt01.mydomain vdsm[19421]: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof

It seems that the "unexpected eof" errors I see in vdsm.log always come right after an RPC call that apparently completes OK, such as

jsonrpc.Executor/1::INFO::2016-10-12 00:16:38,513::__init__::513::jsonrpc.JsonRpcServer::(_serveRequest) RPC call Host.getHardwareInfo succeeded in 0.00 seconds
JsonRpc (StompReactor)::ERROR::2016-10-12 00:16:38,514::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: unexpected eof
Reactor thread::INFO::2016-10-12 00:16:38,734::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:59686

or

jsonrpc.Executor/7::INFO::2016-10-12 00:16:36,277::__init__::513::jsonrpc.JsonRpcServer::(_serveRequest) RPC call Volume.getInfo succeeded in 0.00 seconds
JsonRpc (StompReactor)::ERROR::2016-10-12 00:16:36,283::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: unexpected eof
Reactor thread::INFO::2016-10-12 00:16:36,298::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:59676

Thanks,
Gianluca
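For reference, one way to watch that correlation live might be (standard vdsm.log path assumed):

  tail -f /var/log/vdsm/vdsm.log | grep --line-buffered -B2 'unexpected eof'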

Gianluca,
Please share the engine log as well.
Thanks, Piotr

On Wed, Oct 12, 2016 at 8:55 AM, Piotr Kliczewski <pkliczew@redhat.com> wrote:
Gianluca,
Please share engine log as well.
Thanks, Piotr
Hello, I don't have access right now to that server, but I have another quite similar environment (here the server is a Dell M910) with the same behavior. It is at 4.0.4 too, and this environment was also created in 3.6.x and then updated through the various 4.0.x releases up to 4.0.4 now. The vdsm version is now vdsm-4.18.13-1.el7.centos.x86_64.

here vdsm.log in gzip format
https://drive.google.com/file/d/0BwoPbcrMv8mvUThnOEVqM21JSTg/view?usp=sharin...
and here engine.log in gzip format
https://drive.google.com/file/d/0BwoPbcrMv8mvUUVnZnQ5cGN0NHM/view?usp=sharin...

[root@ractor ~]# systemctl status vdsmd
. vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/etc/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2016-10-03 22:56:11 CEST; 1 weeks 1 days ago
  Process: 30970 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh --post-stop (code=exited, status=0/SUCCESS)
  Process: 30974 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
 Main PID: 31166 (vdsm)
   CGroup: /system.slice/vdsmd.service
           ├─31166 /usr/bin/python /usr/share/vdsm/vdsm
           ├─31265 /usr/libexec/ioprocess --read-pipe-fd 61 --write-pipe-fd 59 --max-threads 10 --max-queued-req...
           ├─31280 /usr/libexec/ioprocess --read-pipe-fd 77 --write-pipe-fd 76 --max-threads 10 --max-queued-req...
           ├─31371 /usr/libexec/ioprocess --read-pipe-fd 105 --write-pipe-fd 103 --max-threads 10 --max-queued-r...
           ├─33870 /usr/libexec/ioprocess --read-pipe-fd 40 --write-pipe-fd 39 --max-threads 10 --max-queued-req...
           ├─33889 /usr/libexec/ioprocess --read-pipe-fd 48 --write-pipe-fd 47 --max-threads 10 --max-queued-req...
           ├─33896 /usr/libexec/ioprocess --read-pipe-fd 58 --write-pipe-fd 55 --max-threads 10 --max-queued-req...
           ├─33904 /usr/libexec/ioprocess --read-pipe-fd 73 --write-pipe-fd 72 --max-threads 10 --max-queued-req...
           └─33911 /usr/libexec/ioprocess --read-pipe-fd 98 --write-pipe-fd 96 --max-threads 10 --max-queued-req...

Oct 12 11:11:49 mynewdomain vdsm[31166]: vdsm vds.dispatcher ERROR SSL error during reading d...eof
Oct 12 11:11:50 mynewdomain vdsm[31166]: vdsm vds.dispatcher ERROR SSL error during reading d...eof
Oct 12 11:11:56 mynewdomain vdsm[31166]: vdsm vds.dispatcher ERROR SSL error during reading d...eof
Oct 12 11:12:01 mynewdomain vdsm[31166]: vdsm vds.dispatcher ERROR SSL error during reading d...eof
Oct 12 11:12:07 mynewdomain vdsm[31166]: vdsm vds.dispatcher ERROR SSL error during reading d...eof
Oct 12 11:12:22 mynewdomain vdsm[31166]: vdsm vds.dispatcher ERROR SSL error during reading d...eof
Oct 12 11:12:33 mynewdomain vdsm[31166]: vdsm vds.dispatcher ERROR SSL error during reading d...eof
Oct 12 11:12:34 mynewdomain vdsm[31166]: vdsm vds.dispatcher ERROR SSL error during reading d...eof
Oct 12 11:12:39 mynewdomain vdsm[31166]: vdsm vds.dispatcher ERROR SSL error during reading d...eof
Oct 12 11:12:45 mynewdomain vdsm[31166]: vdsm vds.dispatcher ERROR SSL error during reading d...eof
Hint: Some lines were ellipsized, use -l to show in full.
[root@ractor ~]#

It would be great if you could provide vdsm and engine logs from the same time.
I see that the engine logs end at 2016-10-12 10:51:36,336 and the vdsm logs start at 2016-10-12 11:01:02,558.
Thanks, Piotr

On Wed, Oct 12, 2016 at 12:52 PM, Piotr Kliczewski <pkliczew@redhat.com> wrote:
It would be great if you could provide vdsm and engine logs from the same time.
I see that the engine logs end at 2016-10-12 10:51:36,336 and the vdsm logs start at 2016-10-12 11:01:02,558.
Thanks, Piotr
Hello, vdsm writes Tb of logs... don't you remember? ;-)
The next line in engine.log was after my collection, at 11:11...

2016-10-12 10:51:36,336 INFO [org.ovirt.engine.core.bll.storage.domain.IsoDomainListSyncronizer] (org.ovirt.thread.pool-8-thread-3) [da0fb3b] Finished automatic refresh process for 'ISO' file type with success, for storage domain id 'fd5754f1-bd00-4337-ad64-1abde35438ae'.
2016-10-12 11:11:15,117 ERROR [org.ovirt.engine.core.sso.utils.SsoUtils] (default task-44) [] Session expired please try again.

BTW: the system keeps 100 vdsm logs in compressed format. Each file uncompressed is about 40 Mbytes. This makes up a total of about 4 Gb, and the oldest line is "2016-10-08 10:01:02,638". This means about 4 days ago...

Is it possible to log in to the system and take a look at the logs? If so, please send me details offline.

On Wed, Oct 12, 2016 at 1:08 PM, Piotr Kliczewski <pkliczew@redhat.com> wrote:
Is it possible to log in to the system and take a look at the logs? If so, please send me details offline.
Unfortunately not to this particular system. The other one is one of my home systems and I can perhaps manage something. But are you interested in any particular log? I can upload without problems.
The strange thing is that I have the same situation on two environments that are configured the same way (single host with self hosted engine), and both passed from an initial configuration in 3.6.x to 4.0.x. Possibly I made the same configuration errors on both, or the problem is the passage from 3.6 to 4.0. But the original poster of the thread perhaps had this problem (solved?) with a direct installation in 4.0. He said in one of his posts:

"all nodes and engines are synced to the same timeserver.
as far as i can tell, the connection is from the node itself, to the vdsm on the node, so no time difference can occur.
the certificate that gets presented by vdsm on port 54321 looks ok.
For what it is worth, I changed the certificate of the hosted-engine web interface by reconfiguring apache's mod_ssl, not touching any certificate in hosted-engine:/etc/pki/ovirt-*"

Does 3.6 to 4.0 involve anything similar to his situation?
Gianluca

What I meant is that you sent logs created at different times, so I was unable to match both ends (engine and vdsm). It would be great if you could send logs from the same time.

On Wed, Oct 12, 2016 at 2:03 PM, Piotr Kliczewski <pkliczew@redhat.com> wrote:
What I meant is that you sent logs created at different times, so I was unable to match both ends (engine and vdsm).
It would be great if you could send logs from the same time.
Probably there is something I don't understand...
I collect logs at 11:05. I copy both the current engine.log and the current vdsm.log and upload them to Google Drive.
Due to how ovirt-engine and vdsmd are logging, and to how they are rotating, I have:

1) the current engine.log; its initial and final lines at that time are

2016-09-27 03:16:08,460 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) [1e566c35] Fetched 5 VMs from VDS '0d6cfc43-99bb-468e-92d8-f615ef3362ae'
...
2016-10-12 10:51:36,336 INFO [org.ovirt.engine.core.bll.storage.domain.IsoDomainListSyncronizer] (org.ovirt.thread.pool-8-thread-3) [da0fb3b] Finished automatic refresh process for 'ISO' file type with success, for storage domain id 'fd5754f1-bd00-4337-ad64-1abde35438ae'.

quite obviously there was no more logging during the latest 14 minutes... and so no more lines inside the file...

2) the current vdsm.log; its initial and final lines are instead

mailbox.SPMMonitor::DEBUG::2016-10-12 11:01:02,558::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail) /usr/bin/taskset --cpu-list 0-31 dd if=/rhev/data-center/00000001-0001-0001-0001-0000000000ec/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000 (cwd None)
...
mailbox.SPMMonitor::DEBUG::2016-10-12 11:05:25,507::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.00455719 s, 225 MB/s\n'; <rc> = 0

Typically vdsm.log rotation happens every hour, and in fact that particular vdsm.log is now (at 14:43) named vdsm.log.3.xz and in total contains from the already shown line

mailbox.SPMMonitor::DEBUG::2016-10-12 11:01:02,558::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail) /usr/bin/taskset --cpu-list 0-31 dd if=/rhev/data-center/00000001-0001-0001-0001-0000000000ec/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000 (cwd None)

up to

Thread-642572::INFO::2016-10-12 12:01:00,744::xmlrpc::91::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:47818 stopped

when it was rotated...
Please explain what is wrong with my approach to giving you the information needed... Do you need more vdsm.log files from further in the past?
Gianluca

As you listed, the engine logs end 14 minutes before the vdsm logs start. It should be enough if you send an older vdsm log which covers the engine log's time frame.
Thanks, Piotr
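For example, something like this (an untested sketch; adjust the date/hour pattern) could pull the matching window out of the rotated, compressed vdsm logs:

  xzgrep '2016-10-12 10:' /var/log/vdsm/vdsm.log.*.xz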

On Wed, Oct 12, 2016 at 3:59 PM, Piotr Kliczewski <pkliczew@redhat.com> wrote:
As you listed, the engine logs end 14 minutes before the vdsm logs start. It should be enough if you send an older vdsm log which covers the engine log's time frame.
Thanks, Piotr
Here it is, the vdsm.log.6.xz file:
https://drive.google.com/file/d/0BwoPbcrMv8mvUlhvdnd4QW9GSUk/view?usp=sharin...

that covers the previous hour, from

jsonrpc.Executor/4::DEBUG::2016-10-12 10:01:01,928::__init__::530::jsonrpc.JsonRpcServer::(_handle_request) Calling 'Host.getHardwareInfo' in bridge with {}

to

mailbox.SPMMonitor::DEBUG::2016-10-12 11:01:00,542::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.00410016 s, 250 MB/s\n'; <rc> = 0

This log did not help me either, because during this specific time there were no logs in the engine:

2016-10-12 09:51:35,296 INFO [org.ovirt.engine.core.bll.storage.domain.IsoDomainListSyncronizer] (org.ovirt.thread.pool-8-thread-13) [141b5168] Finished automatic refresh process for 'ISO' file type with success, for storage domain id 'fd5754f1-bd00-4337-ad64-1abde35438ae'.
2016-10-12 10:42:49,188 INFO [org.ovirt.engine.core.bll.EngineBackupAwarenessManager] (DefaultQuartzScheduler8) [63f92190] Backup check started.

This means that it is not the engine attempting to connect. @Simone can you please check whether this is hosted engine?

Looking more into the logs I see that after host deploy there are no more attempts to connect from the engine:

2016-10-03 23:28:47,891 INFO [org.ovirt.engine.core.uutils.ssh.SSHDialog] (DefaultQuartzScheduler8) [4afdc494] SSH execute 'root@ractor.mynewdomain' 'umask 0077; MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d -t ovirt-XXXXXXXXXX)"; trap "chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1; rm -fr \"${MYTMP}\" > /dev/null 2>&1" 0; tar --warning=no-timestamp -C "${MYTMP}" -x && "${MYTMP}"/ovirt-host-mgmt DIALOG/dialect=str:machine DIALOG/customization=bool:True'

Later in the logs I can see a bunch of redeploys but no sign of an attempt to connect.
Can you please share one of the host deploy logs? According to the logs this is the last one:
/var/log/ovirt-engine/host-deploy/ovirt-host-mgmt-20161011233340-ractor.mynewdomain-23718eb3.log
Thanks, Piotr

On Wed, Oct 12, 2016 at 5:39 PM, Piotr Kliczewski <pkliczew@redhat.com> wrote:
Can you please share one of the host deploy logs? According to the logs this is the last one:
/var/log/ovirt-engine/host-deploy/ovirt-host-mgmt-20161011233340-ractor.mynewdomain-23718eb3.log
Here the file ovirt-host-mgmt-20161011233340-ractor.mynewdomain-23718eb3.log
https://drive.google.com/file/d/0BwoPbcrMv8mvUTVMa1h3cVA2cGs/view?usp=sharin...

Gianluca,
Checking the log it seems that we do not configure the firewall:

NETWORK/firewalldEnable=bool:'False'
NETWORK/iptablesEnable=bool:'False'

Please make sure that you reconfigure your firewall to open port 54321, or let host deploy do it for you.
Thanks, Piotr
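If the port turns out to be closed, a minimal sketch of opening it by hand (rather than re-running host deploy) might be the following; the save step assumes the iptables-services persistence mechanism, so adjust to your setup:

  iptables -I INPUT -p tcp --dport 54321 -j ACCEPT
  service iptables save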

On 13 Oct 2016 at 11:00, "Piotr Kliczewski" <pkliczew@redhat.com> wrote:
Gianluca,
Checking the log it seems that we do not configure the firewall:
NETWORK/firewalldEnable=bool:'False'
NETWORK/iptablesEnable=bool:'False'
Please make sure that you reconfigure your firewall to open port 54321, or let host deploy do it for you.
Thanks, Piotr
Hi, at this moment I have:
On the hypervisor, the iptables service configured and active.
On the engine, the firewalld service configured and active.
Do I have to open port 54321 on the host?

On Thu, Oct 13, 2016 at 11:13 AM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
Actually it is already...

[root@ovirt01 ~]# iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:53
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:53
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:67
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:67
ACCEPT all -- 192.168.1.212 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:54321
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:111
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:111
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:22
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:161
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:16514
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 2223
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 5900:6923
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 49152:49216
REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 192.168.122.0/24 ctstate RELATED,ESTABLISHED
ACCEPT all -- 192.168.122.0/24 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
REJECT all -- 0.0.0.0/0 0.0.0.0/0 PHYSDEV match ! --physdev-is-bridged reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:68
[root@ovirt01 ~]#
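To double-check that it is actually vdsm listening on that port, something like this should work:

  ss -tlnp | grep 54321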

Gianluca,
The port needs to be open on the machines where vdsm is installed.
@Simone can you take a look at why, after running host deploy at 2016-10-03 23:28:47,891, we are not able to talk to vdsm anymore?
Thanks, Piotr
On Thu, Oct 13, 2016 at 11:13 AM, Gianluca Cecchi < gianluca.cecchi@gmail.com> wrote:
Il 13/Ott/2016 11:00, "Piotr Kliczewski" <pkliczew@redhat.com> ha scritto:
Gianluca,
Checking the log it seems that we do not configure firewall:
NETWORK/firewalldEnable=bool:'False' NETWORK/iptablesEnable=bool:'False'
Please make sure that you reconfigure your firewall to open 54321 port
or let host deploy to do it for you.
Thanks, Piotr
Hi, at this moment Ihave: On hypervisor iptables service configured and active. On engine firewalld service configured and active. Do I have to open port 54321 on host?
Actually it is already...
root@ovirt01 ~]# iptables -L -n Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:53 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:53 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:67 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:67 ACCEPT all -- 192.168.1.212 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:54321 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:111 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:111 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:161 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:16514 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 2223 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 5900:6923 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 49152:49216 REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT) target prot opt source destination ACCEPT all -- 0.0.0.0/0 192.168.122.0/24 ctstate RELATED,ESTABLISHED ACCEPT all -- 192.168.122.0/24 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable REJECT all -- 0.0.0.0/0 0.0.0.0/0 PHYSDEV match ! --physdev-is-bridged reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT) target prot opt source destination ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:68 [root@ovirt01 ~]#

On Thu, Oct 13, 2016 at 11:23 AM, Piotr Kliczewski <pkliczew@redhat.com> wrote:
Gianluca,
The port needs to be open on machines where vdsm is installed.
@Simone can you take a look at why, after running host deploy at 2016-10-03 23:28:47,891, we are not able to talk to vdsm anymore?
Thanks, Piotr

OK, I'm on it.

On Thu, Oct 13, 2016 at 2:45 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
OK, I'm on it.
Gianluca, can you please share somehow the output of ss -at on all your hosts, your /var/log/ovirt-hosted-engine-ha/agent.log and /var/log/ovirt-hosted-engine-ha/broker.log (maybe I simply lost them within this long thread).
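A possible one-shot way to collect those (paths as listed above; the output file names are just examples):

  ss -at > /tmp/ss-$(hostname).txt
  tar czf /tmp/he-ha-logs-$(hostname).tar.gz /var/log/ovirt-hosted-engine-ha/agent.log /var/log/ovirt-hosted-engine-ha/broker.log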
Thanks, Piotr
On Thu, Oct 13, 2016 at 11:15 AM, Gianluca Cecchi < gianluca.cecchi@gmail.com> wrote:
On Thu, Oct 13, 2016 at 11:13 AM, Gianluca Cecchi < gianluca.cecchi@gmail.com> wrote:
Il 13/Ott/2016 11:00, "Piotr Kliczewski" <pkliczew@redhat.com> ha scritto:
Gianluca,
Checking the log it seems that we do not configure firewall:
NETWORK/firewalldEnable=bool:'False' NETWORK/iptablesEnable=bool:'False'
Please make sure that you reconfigure your firewall to open 54321
port or let host deploy to do it for you.
Thanks, Piotr
Hi, at this moment Ihave: On hypervisor iptables service configured and active. On engine firewalld service configured and active. Do I have to open port 54321 on host?
Actually it is already...
[iptables -L -n output snipped; identical to the listing above]
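For the ss -at output Simone asked for, a short filter like this keeps only the header and the vdsm port lines instead of the whole dump (a sketch; assumes iproute2's ss and Python on the host):

import subprocess

# -n keeps ports numeric so 54321 is matched literally
out = subprocess.check_output(["ss", "-atn"]).decode()
for line in out.splitlines():
    if line.startswith("State") or ":54321" in line:
        print(line)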

On Thu, Oct 13, 2016 at 2:59 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
[quote of the earlier thread snipped]
ss log for host: https://drive.google.com/file/d/0BwoPbcrMv8mvczVOeG1iUWZxS1U/view?usp=sharing
ss log for engine: https://drive.google.com/file/d/0BwoPbcrMv8mvWGx0QWstWG1TSWc/view?usp=sharing
agent.log: https://drive.google.com/file/d/0BwoPbcrMv8mvMFBrQ2lneFVwaGc/view?usp=sharing
broker.log: https://drive.google.com/file/d/0BwoPbcrMv8mva2Jsc3BkNkpNZFE/view?usp=sharing
Hope it helps to clarify.

On Thu, Oct 13, 2016 at 3:19 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
[quote of the earlier thread snipped]
Thanks, the only errors that I see in the agent and broker logs are:

Thread-6::INFO::2016-10-13 12:29:40,783::engine_health::124::engine_health.CpuLoadNoEngine::(action) VM is up on this host with healthy engine
Thread-1::ERROR::2016-10-13 12:29:42,859::notifications::39::ovirt_hosted_engine_ha.broker.notifications.Notifications::(send_email) [Errno 101] Network is unreachable
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/notifications.py", line 26, in send_email
    timeout=float(cfg["smtp-timeout"]))
  File "/usr/lib64/python2.7/smtplib.py", line 255, in __init__
    (code, msg) = self.connect(host, port)
  File "/usr/lib64/python2.7/smtplib.py", line 315, in connect
    self.sock = self._get_socket(host, port, self.timeout)
  File "/usr/lib64/python2.7/smtplib.py", line 290, in _get_socket
    return socket.create_connection((host, port), timeout)
  File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
    raise err
error: [Errno 101] Network is unreachable

This happens when the broker tries to send a notification email (it cannot reach the SMTP server), but vdsm communication seems fine.
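Since the traceback comes from the broker's send_email, that failure can be reproduced in isolation with the same smtplib call pattern; a sketch with placeholder values (substitute whatever SMTP host and port the broker is actually configured with):

import smtplib
import socket

SMTP_HOST = "localhost"  # placeholder: the broker's configured SMTP server
SMTP_PORT = 25           # placeholder: the broker's configured SMTP port

try:
    server = smtplib.SMTP(SMTP_HOST, SMTP_PORT, timeout=10.0)
    server.quit()
    print("SMTP server reachable")
except (socket.error, smtplib.SMTPException) as exc:
    print("cannot reach SMTP server: %s" % exc)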
[rest of the quoted thread, including the log links above, snipped]

On Thu, Oct 13, 2016 at 2:45 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
[quote of the earlier thread snipped]
In the meantime I confirmed that even without ipv6 the situation doesn't change:
- global maintenance
- stop the ovirt-engine service
- create no-ipv6.conf under /etc/sysctl.d of the engine
- systemctl restart network --> no more ipv6
- shutdown the engine
- exit from maintenance

After a while the engine is powered on again. On the host:

vdsm 6767 vdsm  24u IPv4 15528247 0t0 TCP *:54321 (LISTEN)
vdsm 6767 vdsm  82u IPv4 15528876 0t0 TCP ovirt01.mydomain:54321->ovirt.mydomain:52980 (ESTABLISHED)
vdsm 6767 vdsm 110u IPv4 15534849 0t0 TCP ovirt01.mydomain:54321->ovirt.mydomain:52984 (ESTABLISHED)

On the engine now:

[root@ovirt host-deploy]# netstat -an | grep 54321
tcp 0 0 192.168.1.212:52984 192.168.1.211:54321 ESTABLISHED
tcp 0 0 192.168.1.212:52980 192.168.1.211:54321 ESTABLISHED
[root@ovirt host-deploy]#

But vdsmd shows the same errors, even after restarting vdsmd:

Oct 13 14:49:20 ovirt01.mydomain vdsm[6767]: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof

How can I force the creation of the ovirt-host-mgmt file? I only see that this one file has been generated, ovirt-host-mgmt-20161013124548-ovirt01.mydomain-null.log, here: https://drive.google.com/file/d/0BwoPbcrMv8mvbXI3cndGcEtXbWs/view?usp=sharing
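For what it's worth, an "unexpected eof" during an SSL read typically indicates the peer closed the connection without a proper TLS shutdown, for example a client that connects to the port and drops without ever starting the handshake (simple TCP health checks and monitoring probes do exactly this, as with the connect test earlier). A sketch that provokes one such log entry on purpose, so the pattern can be confirmed (host name is a placeholder):

import socket

# Connect to vdsm's SSL port and close without sending a TLS ClientHello;
# the server side typically records this as an SSL "unexpected eof".
sock = socket.create_connection(("ovirt01.mydomain", 54321), timeout=5)
sock.close()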
participants (9)
- C. Handel
- Gianluca Cecchi
- Nir Soffer
- Oved Ourfali
- Piotr Kliczewski
- Piotr Kliczewski
- Simone Tiraboschi
- Yaniv Kaul
- Швыгин Алексей Михайлович