Re: [ovirt-users] Some VMs in status "not responding" in oVirt interface
by Markus Stockhausen 29 Sep '15
Do you have a chance to install qemu-debug? If yes I would try a backtrace.

gdb -p <qemu-pid>
# bt

Markus

On 15.09.2015 4:15 p.m., Daniel Helgenberger <daniel.helgenberger@m-box.de> wrote:
Hello,

I do not want to hijack the thread but maybe my issue is related?

It might have started with ovirt 3.5.3; but I cannot tell for sure.

For me, one vm (foreman) is affected; the second time in 14 days. I can confirm this as I also lose any network connection to the VM and the ability to connect a console.
Also, the only thing which 'fixes' the issue right now is 'kill -9 <pid of qemu-kvm process>'

As far as I can tell the VM became unresponsive at around Sep 15 12:30:01; the engine logged this at 12:34. Nothing obvious in VDSM logs (see attached).

Below the engine.log part.

Versions:
ovirt-engine-3.5.4.2-1.el7.centos.noarch
vdsm-4.16.26-0.el7.centos
libvirt-1.2.8-16.el7_1.3

engine.log (1200 - 1300):
2015-09-15 12:03:47,949 INFO  [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-56) [264d502a] HA reservation status for cluster Default is OK
2015-09-15 12:08:02,708 INFO  [org.ovirt.engine.core.bll.OvfDataUpdater] (DefaultQuartzScheduler_Worker-89) [2e7bf56e] Attempting to update VMs/Templates Ovf.
2015-09-15 12:08:02,709 INFO  [org.ovirt.engine.core.bll.ProcessOvfUpdateForStoragePoolCommand] (DefaultQuartzScheduler_Worker-89) [5e9f4ba6] Running command: ProcessOvfUpdateForStoragePoolCommand internal: true. Entities affected :  ID: 00000002-0002-0002-0002-000000000088 Type: l
2015-09-15 12:08:02,780 INFO  [org.ovirt.engine.core.bll.ProcessOvfUpdateForStoragePoolCommand] (DefaultQuartzScheduler_Worker-89) [5e9f4ba6] Lock freed to object EngineLock [exclusiveLocks= key: 00000002-0002-0002-0002-000000000088 value: OVF_UPDATE
2015-09-15 12:08:47,997 INFO  [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-21) [3fc854a2] HA reservation status for cluster Default is OK
2015-09-15 12:13:06,998 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetFileStatsVDSCommand] (org.ovirt.thread.pool-8-thread-48) [50221cdc] START, GetFileStatsVDSCommand( storagePoolId = 00000002-0002-0002-0002-000000000088, ignoreFailoverLimit = false), log id: 1503968
2015-09-15 12:13:07,137 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetFileStatsVDSCommand] (org.ovirt.thread.pool-8-thread-48) [50221cdc] FINISH, GetFileStatsVDSCommand, return: {pfSense-2.0-RELEASE-i386.iso={status=0, ctime=1432286887.0, size=115709952}, Fedora-15-i686-Live8
2015-09-15 12:13:07,178 INFO  [org.ovirt.engine.core.bll.IsoDomainListSyncronizer] (org.ovirt.thread.pool-8-thread-48) [50221cdc] Finished automatic refresh process for ISO file type with success, for storage domain id 84dcb2fc-fb63-442f-aa77-3e84dc7d5a72.
2015-09-15 12:13:48,043 INFO  [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-87) [4fa1bb16] HA reservation status for cluster Default is OK
2015-09-15 12:18:48,088 INFO  [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-44) [6345e698] HA reservation status for cluster Default is OK
2015-09-15 12:23:48,137 INFO  [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-13) HA reservation status for cluster Default is OK
2015-09-15 12:28:48,183 INFO  [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-76) [154c91d5] HA reservation status for cluster Default is OK
2015-09-15 12:33:48,229 INFO  [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-36) [27c73ac6] HA reservation status for cluster Default is OK
2015-09-15 12:34:49,432 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-41) [5f2a4b68] VM foreman 8b57ff1d-2800-48ad-b267-fd8e9e2f6fb2 moved from Up --> NotResponding
2015-09-15 12:34:49,578 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-41) [5f2a4b68] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM foreman is not responding.
2015-09-15 12:38:48,273 INFO  [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-10) [7a800766] HA reservation status for cluster Default is OK
2015-09-15 12:43:48,320 INFO  [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-42) [440f1c40] HA reservation status for cluster Default is OK
2015-09-15 12:48:48,366 INFO  [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-70) HA reservation status for cluster Default is OK
2015-09-15 12:53:48,412 INFO  [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-12) [50221cdc] HA reservation status for cluster Default is OK
2015-09-15 12:58:48,459 INFO  [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler_Worker-3) HA reservation status for cluster Default is OK

On 29.08.2015 22:48, Christian Hailer wrote:
> Hello,
>
> last Wednesday I wanted to update my oVirt 3.5 hypervisor. It is a single CentOS
> 7 server, so I started by suspending the VMs in order to set the oVirt engine
> host to maintenance mode. During the process of suspending the VMs the server
> crashed, kernel panic…
>
> After restarting the server I installed the updates via yum and restarted the
> server again. Afterwards, all the VMs could be started again. Some hours later
> my monitoring system registered some unresponsive hosts, I had a look in the
> oVirt interface, 3 of the VMs were in the state “not responding”, marked by a
> question mark.
>
> I tried to shut down the VMs, but oVirt wasn’t able to do so. I tried to reset
> the status in the database with the sql statement
>
> update vm_dynamic set status = 0 where vm_guid = (select vm_guid from vm_static
> where vm_name = 'MYVMNAME');
>
> but that didn’t help, either. Only rebooting the whole hypervisor helped…
> afterwards everything worked again. But only for a few hours, then one of the
> VMs entered the “not responding” state again… again only a reboot helped.
> Yesterday it happened again:
>
> 2015-08-28 17:44:22,664 INFO
> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> (DefaultQuartzScheduler_Worker-60) [4ef90b12] VM DC
> 0f3d1f06-e516-48ce-aa6f-7273c33d3491 moved from Up --> NotResponding
>
> 2015-08-28 17:44:22,692 WARN
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (DefaultQuartzScheduler_Worker-60) [4ef90b12] Correlation ID: null, Call Stack:
> null, Custom Event ID: -1, Message: VM DC is not responding.
>
> Does anybody know what I can do? Where should I have a look? Hints are greatly
> appreciated!
>
> Thanks,
>
> Christian

--
Daniel Helgenberger
m box bewegtbild GmbH

P: +49/30/2408781-22
F: +49/30/2408781-10

ACKERSTR. 19
D-10115 BERLIN

www.m-box.de  www.monkeymen.tv

Managing directors: Martin Retschitzegger / Michaela Göllner
Commercial register: Amtsgericht Charlottenburg / HRB 112767
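A minimal sketch of the backtrace suggestion above, assuming the qemu debuginfo packages are installed on the host; <vm-name> and <qemu-pid> are placeholders to fill in:

    # Find the qemu-kvm process of the affected VM (the VM name appears on its command line)
    ps aux | grep [q]emu-kvm | grep <vm-name>
    # Attach gdb, dump a backtrace of all threads, then detach
    gdb -p <qemu-pid> --batch -ex 'thread apply all bt'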
Hi Yaniv,
I successfully tested the foreman integration with ovirt 3.5 for host
provisioning, but I'm now very interested in VM provisioning during
VM creation, with a PXE boot on a kickstart file generated by foreman.
I've tested VM provisioning on the foreman side with a compute
resource and all works as expected. But to my mind, it would be easier
to handle this workflow with ovirt as a single interface. Is there any
ongoing work to implement this in ovirt?
--
Nathanaël Blanchet
Network supervision
IT Infrastructure Division
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
Hi,

I have 3 nodes with glusterfs installed. I configured replica 3 on my engine volume.

I don't want to use ctdb for my hosted-engine, but when I start hosted-engine --deploy and use localhost:/engine or gluster.localdomain:/engine (gluster.localdomain is configured in /etc/hosts on all servers like:
host 1 : "192.168.100.101 gluster.localdomain gluster"
host 2 : "192.168.100.102 gluster.localdomain gluster"
host 3 : "192.168.100.103 gluster.localdomain gluster"),
the setup fails with:

--== STORAGE CONFIGURATION ==--

          During customization use CTRL-D to abort.
          Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs3, nfs4)[nfs3]: glusterfs
[ INFO ] Please note that Replica 3 support is required for the shared storage.
          Please specify the full shared storage connection path to use (example: host:/path): localhost:/engine
[WARNING] Due to several bugs in mount.glusterfs the validation of GlusterFS share cannot be reliable.
[ INFO ] GlusterFS replica 3 Volume detected
[ ERROR ] Failed to execute stage 'Environment customization': Connection to storage server failed
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20150925121036.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination

Do I have to use ctdb or keepalived, or is there another way?

--
Florent BELLO
IT Department
informatique(a)ville-kourou.fr
0594 22 31 22
Mairie de Kourou
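A quick way to check whether the volume itself is mountable outside of hosted-engine --deploy is a manual FUSE mount; a minimal sketch, assuming the host and volume names from the message above (the mount point and log file name are illustrative):

    mkdir -p /mnt/enginetest
    mount -t glusterfs gluster.localdomain:/engine /mnt/enginetest
    # If the mount fails, the gluster client log (named after the mount point) usually says why
    tail -n 50 /var/log/glusterfs/mnt-enginetest.log
    umount /mnt/enginetest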
Re: [ovirt-users] Add external provider (Failed with error PROVIDER_FAILURE and code 5050) foreman 1.9
by Daniel Helgenberger 29 Sep '15
Sorry for the delay; was working out of town.
On 25.09.2015 16:14, Nathanaël Blanchet wrote:
> Thanks for answering, so what am I supposed to do? regenerating foreman
> certificate?
This depends on your engine OS; try one of these hints:
http://kb.kerio.com/product/kerio-connect/server-configuration/ssl-certific…
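One common way to make an EL7 engine host trust the Foreman CA is to add it to the system trust store; a hedged sketch (the hostname and certificate file are assumptions, and some oVirt versions may instead need the certificate imported into the engine's own truststore):

    # Inspect the certificate chain the Foreman server presents
    openssl s_client -connect foreman.example.com:443 -showcerts </dev/null
    # Add the CA certificate (saved here as foreman-ca.pem) to the system trust store
    cp foreman-ca.pem /etc/pki/ca-trust/source/anchors/
    update-ca-trust extract
    # Restart the engine so it picks up the change
    systemctl restart ovirt-engine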
>
> On 25/09/2015 15:44, Daniel Helgenberger wrote:
>> Just a guess: I had the same issue. It turned out the foreman cert / CA was not trusted.
>>
>> On 25.09.2015 08:50, Nathanaël Blanchet wrote:
>>> Hello,
>>> I have a working foreman 1.9.1 installed with katello 2.3.
>>> As required, ruby193-rubygem-ovirt_provision_plugin-1.0.1-1.el7 is also
>>> installed on the same host.
>>> But the issue is the same as below when testing in "add external
>>> provider" from ovirt 3.5.4.
>>> Is it a known bug?
>>> _______________________________________________
>>> Users mailing list
>>> Users(a)ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>
--
Daniel Helgenberger
m box bewegtbild GmbH
P: +49/30/2408781-22
F: +49/30/2408781-10
ACKERSTR. 19
D-10115 BERLIN
www.m-box.de www.monkeymen.tv
Managing directors: Martin Retschitzegger / Michaela Göllner
Commercial register: Amtsgericht Charlottenburg / HRB 112767
Hi Ovirt users,
I'm running ovirt hosted 3.4 with gluster data storage.
When I add a new host (CentOS 6.6) the data storage (as a glusterfs volume)
cannot be mounted.
I have the following errors in the gluster client log file:
[2015-09-24 12:27:22.636221] I [MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2015-09-24 12:27:22.636588] W [socket.c:588:__socket_rwv] 0-glusterfs:
readv on 172.16.0.5:24007 failed (No data available)
[2015-09-24 12:27:22.637307] E [rpc-clnt.c:362:saved_frames_unwind] (-->
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1eb)[0x7f427fb3063b]
(--> /usr/lib64/libgfrpc.so.0(saved_frames_unwind+0x1e7)[0x7f427f8fc1d7]
(--> /usr/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f427f8fc2ee]
(-->
/usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xab)[0x7f427f8fc3bb]
(--> /usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x1c2)[0x7f427f8fc9f2]
))))) 0-glusterfs: forced unwinding frame type(GlusterFS Handshake)
op(GETSPEC(2)) called at 2015-09-24 12:27:22.636344 (xid=0x1)
[2015-09-24 12:27:22.637333] E [glusterfsd-mgmt.c:1604:mgmt_getspec_cbk]
0-mgmt: failed to fetch volume file (key:/data)
[2015-09-24 12:27:22.637360] W [glusterfsd.c:1219:cleanup_and_exit]
(-->/usr/lib64/libgfrpc.so.0(saved_frames_unwind+0x20e) [0x7f427f8fc1fe]
-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x3f2) [0x40d5d2]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x65) [0x4059b5] ) 0-: received
signum (0), shutting down
[2015-09-24 12:27:22.637375] I [fuse-bridge.c:5595:fini] 0-fuse:
Unmounting '/rhev/data-center/mnt/glusterSD/172.16.0.5:_data'.
[2015-09-24 12:27:22.646246] W [glusterfsd.c:1219:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7a51) [0x7f427ec18a51]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xcd) [0x405e4d]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x65) [0x4059b5] ) 0-: received
signum (15), shutting down
[2015-09-24 12:27:22.646246] W [glusterfsd.c:1219:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7a51) [0x7f427ec18a51]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xcd) [0x405e4d]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x65) [0x4059b5] ) 0-: received
signum (15), shutting down
And nothing on the server side.
I suppose it is a version issue, since on the server side I have
glusterfs-api-3.6.3-1.el6.x86_64
glusterfs-fuse-3.6.3-1.el6.x86_64
glusterfs-libs-3.6.3-1.el6.x86_64
glusterfs-3.6.3-1.el6.x86_64
glusterfs-cli-3.6.3-1.el6.x86_64
glusterfs-rdma-3.6.3-1.el6.x86_64
glusterfs-server-3.6.3-1.el6.x86_64
and on the new host :
glusterfs-3.7.4-2.el6.x86_64
glusterfs-api-3.7.4-2.el6.x86_64
glusterfs-libs-3.7.4-2.el6.x86_64
glusterfs-fuse-3.7.4-2.el6.x86_64
glusterfs-cli-3.7.4-2.el6.x86_64
glusterfs-server-3.7.4-2.el6.x86_64
glusterfs-client-xlators-3.7.4-2.el6.x86_64
glusterfs-rdma-3.7.4-2.el6.x86_64
But since it is a production system, I'm not confident about performing a
gluster server upgrade.
Mounting a gluster volume as NFS is possible (the engine data storage
has been mounted successfully).
I'm asking here because glusterfs comes from the oVirt 3.4 rpm repository.
If anyone has a hint about this problem,
thanks
Jean-Michel
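A minimal sketch for narrowing this down outside VDSM, using the server address and volume name that appear in the log above (the mount point is illustrative):

    # On the new CentOS 6.6 host, try a manual FUSE mount of the data volume
    mkdir -p /mnt/datatest
    mount -t glusterfs 172.16.0.5:/data /mnt/datatest
    # Compare the client packages on the new host with the 3.6.3 packages on the servers
    rpm -qa 'glusterfs*' | sort
    # If the 3.7 client turns out to be the problem, one option (to be tested carefully,
    # and only if 3.6.x packages are available in a repository) would be:
    # yum downgrade 'glusterfs*'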
Hi Brett,
Can you truncate the gluster brick and mount logs on all three nodes,
try creating the storage domain again and then share these logs along
with the VDSM logs?
i.e. on all 3 nodes,
1. echo >
/var/log/glusterfs/rhev-data-center-mnt-glusterSD-sjcstorage01:_vmstore.log
2. echo > export-vmstore-brick01.log
3. Create the storage domain (at which point VDSM supposedly fails with
the truncate error)
4. Share the logs.
Also, what timezone are you in? That would be needed to correlate the
timestamps in the vdsm log (local time) and the gluster log (UTC).
Thanks!
Ravi
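Spelled out as a sketch to run on each of the three nodes (the brick log path is an assumption based on the default /var/log/glusterfs/bricks/ location):

    # Steps 1 and 2: empty the mount log and the brick log
    echo > /var/log/glusterfs/rhev-data-center-mnt-glusterSD-sjcstorage01:_vmstore.log
    echo > /var/log/glusterfs/bricks/export-vmstore-brick01.log
    # Step 3: re-create the storage domain from the oVirt web UI
    # Step 4: collect the fresh gluster and VDSM logs for sharing
    tar czf gluster-vdsm-logs.tar.gz /var/log/glusterfs /var/log/vdsm/vdsm.log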
>
> -------- Forwarded Message --------
> Subject: Re: [ovirt-users] adding gluster domains
> Date: Tue, 29 Sep 2015 08:38:49 +1000
> From: Brett Stevens <gorttman(a)i3sec.com.au>
> Reply-To: brett(a)i3sec.com.au
> To: Sahina Bose <sabose(a)redhat.com>
>
>
>
> Sorry about the delay, I've run the truncate. I'm not sure what
> results you were expecting, but it executed fine, no delays no errors
> no problems.
>
> thanks
> Brett Stevens
>
> On Thu, Sep 24, 2015 at 7:29 PM, Brett Stevens <gorttman(a)i3sec.com.au> wrote:
>
> Thanks I'll do that tomorrow morning.
>
>     Just out of interest, I keep getting WARN errors in the engine.log
>     along the lines of node not present (sjcvhost02 which is the
> arbiter) and no gluster network present even after I have added
> the gluster network option in the network management gui.
>
> thanks
>
> Brett Stevens
>
>
> On Thu, Sep 24, 2015 at 7:26 PM, Sahina Bose <sabose(a)redhat.com>
> wrote:
>
> Sorry, I intended to forward it to a gluster devel.
>
> Btw, there were no errors in the mount log - so unable to root
> cause why truncate of file failed with IO error. Was the log
> from vhost03 -
> /var/log/glusterfs/rhev-data-center-mnt-glusterSD-sjcstorage01:_vmstore.log
> ?
>
> We will look into the logs you attached to see if there are
> any errors reported at the bricks. (But there should have been
> some error in mount log!)
>
> Could you also try "truncate -s 10M test" from the mount point
> ( manually mount gluster using - #mount -t glusterfs
> sjcstorage01:/vmstore <mountpoint>) and report results.
>
> On 09/24/2015 02:32 PM, Brett Stevens wrote:
>> Hi Sahina.
>>
>> Something has gone wrong with your last email. I have
>> received a message from you, but did not get any text to go
>> with it. could you resend please?
>>
>> thanks
>>
>>
>> On Thu, Sep 24, 2015 at 6:48 PM, Sahina Bose
>> <sabose(a)redhat.com> wrote:
>>
>>
>>
>> On 09/24/2015 04:21 AM, Brett Stevens wrote:
>>> Hi Sahina.
>>>
>>> vhost02 is the engine node vhost03 is the hypervisor
>>> storage01 and 02 the gluster nodes. I've put arbiter on
>>> vhost02
>>>
>>> all tasks are separated (except engine and arbiter)
>>>
>>> thanks
>>>
>>>
>>> On Wed, Sep 23, 2015 at 9:48 PM, Sahina Bose
>>> <sabose(a)redhat.com> wrote:
>>>
>>> + ovirt-users
>>>
>>> Some clarity on your setup -
>>> sjcvhost03 - is this your arbiter node and ovirt
>>> management node? And are you running a compute +
>>> storage on the same nodes - i.e, sjcstorage01,
>>> sjcstorage02, sjcvhost03 (arbiter).
>>>
>>>
>>> CreateStorageDomainVDSCommand(HostName = sjcvhost03,
>>> CreateStorageDomainVDSCommandParameters:{runAsync='true',
>>> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
>>> storageDomain='StorageDomainStatic:{name='sjcvmstore',
>>> id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
>>> args='sjcstorage01:/vmstore'}), log id: b9fe587
>>>
>>> - fails with Error creating a storage domain's
>>> metadata: ("create meta file 'outbox' failed: [Errno
>>> 5] Input/output error",
>>>
>>> Are the vdsm logs you provided from sjcvhost03?
>>> There are no errors to be seen in the gluster log
>>> you provided. Could you provide mount log from
>>> sjcvhost03 (at
>>> /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore.log
>>> most likely)
>>> If possible, /var/log/glusterfs/* from the 3 storage
>>> nodes.
>>>
>>> thanks
>>> sahina
>>>
>>> On 09/23/2015 05:02 AM, Brett Stevens wrote:
>>>> Hi Sahina,
>>>>
>>>> as requested here is some logs taken during a
>>>> domain create.
>>>>
>>>> 2015-09-22 18:46:44,320 INFO
>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
>>>> (DefaultQuartzScheduler_Worker-88) [] START,
>>>> GlusterVolumesListVDSCommand(HostName =
>>>> sjcstorage01,
>>>> GlusterVolumesListVDSParameters:{runAsync='true',
>>>> hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}),
>>>> log id: 2205ff1
>>>>
>>>> 2015-09-22 18:46:44,413 WARN
>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>>>> (DefaultQuartzScheduler_Worker-88) [] Could not
>>>> associate brick
>>>> 'sjcstorage01:/export/vmstore/brick01' of volume
>>>> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct
>>>> network as no gluster network found in cluster
>>>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>>>
>>>> 2015-09-22 18:46:44,417 WARN
>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>>>> (DefaultQuartzScheduler_Worker-88) [] Could not
>>>> associate brick
>>>> 'sjcstorage02:/export/vmstore/brick01' of volume
>>>> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct
>>>> network as no gluster network found in cluster
>>>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>>>
>>>> 2015-09-22 18:46:44,417 WARN
>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>>>> (DefaultQuartzScheduler_Worker-88) [] Could not add
>>>> brick 'sjcvhost02:/export/vmstore/brick01' to
>>>> volume '030f270a-0999-4df4-9b14-ae56eb0a2fb9' -
>>>> server uuid '29b58278-9aa3-47c5-bfb4-1948ef7fdbba'
>>>> not found in cluster
>>>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>>>
>>>> 2015-09-22 18:46:44,418 INFO
>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
>>>> (DefaultQuartzScheduler_Worker-88) [] FINISH,
>>>> GlusterVolumesListVDSCommand, return:
>>>> {030f270a-0999-4df4-9b14-ae56eb0a2fb9=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a0628f36},
>>>> log id: 2205ff1
>>>>
>>>> 2015-09-22 18:46:45,215 INFO
>>>> [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
>>>> (default task-24) [5099cda3] Lock Acquired to
>>>> object
>>>> 'EngineLock:{exclusiveLocks='[sjcstorage01:/vmstore=<STORAGE_CONNECTION,
>>>> ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
>>>> sharedLocks='null'}'
>>>>
>>>> 2015-09-22 18:46:45,230 INFO
>>>> [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
>>>> (default task-24) [5099cda3] Running command:
>>>> AddStorageServerConnectionCommand internal: false.
>>>> Entities affected : ID:
>>>> aaa00000-0000-0000-0000-123456789aaa Type:
>>>> SystemAction group CREATE_STORAGE_DOMAIN with role
>>>> type ADMIN
>>>>
>>>> 2015-09-22 18:46:45,233 INFO
>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
>>>> (default task-24) [5099cda3] START,
>>>> ConnectStorageServerVDSCommand(HostName =
>>>> sjcvhost03,
>>>> StorageServerConnectionManagementVDSParameters:{runAsync='true',
>>>> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
>>>> storagePoolId='00000000-0000-0000-0000-000000000000',
>>>> storageType='GLUSTERFS',
>>>> connectionList='[StorageServerConnections:{id='null',
>>>> connection='sjcstorage01:/vmstore', iqn='null',
>>>> vfsType='glusterfs', mountOptions='null',
>>>> nfsVersion='null', nfsRetrans='null',
>>>> nfsTimeo='null', iface='null',
>>>> netIfaceName='null'}]'}), log id: 6a112292
>>>>
>>>> 2015-09-22 18:46:48,065 INFO
>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
>>>> (default task-24) [5099cda3] FINISH,
>>>> ConnectStorageServerVDSCommand, return:
>>>> {00000000-0000-0000-0000-000000000000=0}, log id:
>>>> 6a112292
>>>>
>>>> 2015-09-22 18:46:48,073 INFO
>>>> [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
>>>> (default task-24) [5099cda3] Lock freed to object
>>>> 'EngineLock:{exclusiveLocks='[sjcstorage01:/vmstore=<STORAGE_CONNECTION,
>>>> ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
>>>> sharedLocks='null'}'
>>>>
>>>> 2015-09-22 18:46:48,188 INFO
>>>> [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
>>>> (default task-23) [6410419] Running command:
>>>> AddGlusterFsStorageDomainCommand internal: false.
>>>> Entities affected : ID:
>>>> aaa00000-0000-0000-0000-123456789aaa Type:
>>>> SystemAction group CREATE_STORAGE_DOMAIN with role
>>>> type ADMIN
>>>>
>>>> 2015-09-22 18:46:48,206 INFO
>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
>>>> (default task-23) [6410419] START,
>>>> ConnectStorageServerVDSCommand(HostName =
>>>> sjcvhost03,
>>>> StorageServerConnectionManagementVDSParameters:{runAsync='true',
>>>> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
>>>> storagePoolId='00000000-0000-0000-0000-000000000000',
>>>> storageType='GLUSTERFS',
>>>> connectionList='[StorageServerConnections:{id='ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e',
>>>> connection='sjcstorage01:/vmstore', iqn='null',
>>>> vfsType='glusterfs', mountOptions='null',
>>>> nfsVersion='null', nfsRetrans='null',
>>>> nfsTimeo='null', iface='null',
>>>> netIfaceName='null'}]'}), log id: 38a2b0d
>>>>
>>>> 2015-09-22 18:46:48,219 INFO
>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
>>>> (default task-23) [6410419] FINISH,
>>>> ConnectStorageServerVDSCommand, return:
>>>> {ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=0}, log id:
>>>> 38a2b0d
>>>>
>>>> 2015-09-22 18:46:48,221 INFO
>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
>>>> (default task-23) [6410419] START,
>>>> CreateStorageDomainVDSCommand(HostName =
>>>> sjcvhost03,
>>>> CreateStorageDomainVDSCommandParameters:{runAsync='true',
>>>> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
>>>> storageDomain='StorageDomainStatic:{name='sjcvmstore',
>>>> id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
>>>> args='sjcstorage01:/vmstore'}), log id: b9fe587
>>>>
>>>> 2015-09-22 18:46:48,744 ERROR
>>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>>> (default task-23) [6410419] Correlation ID: null,
>>>> Call Stack: null, Custom Event ID: -1, Message:
>>>> VDSM sjcvhost03 command failed: Error creating a
>>>> storage domain's metadata: ("create meta file
>>>> 'outbox' failed: [Errno 5] Input/output error",)
>>>>
>>>> 2015-09-22 18:46:48,744 INFO
>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
>>>> (default task-23) [6410419] Command
>>>> 'org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand'
>>>> return value 'StatusOnlyReturnForXmlRpc
>>>> [status=StatusForXmlRpc [code=362, message=Error
>>>> creating a storage domain's metadata: ("create meta
>>>> file 'outbox' failed: [Errno 5] Input/output
>>>> error",)]]'
>>>>
>>>> 2015-09-22 18:46:48,744 INFO
>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
>>>> (default task-23) [6410419] HostName = sjcvhost03
>>>>
>>>> 2015-09-22 18:46:48,745 ERROR
>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
>>>> (default task-23) [6410419] Command
>>>> 'CreateStorageDomainVDSCommand(HostName =
>>>> sjcvhost03,
>>>> CreateStorageDomainVDSCommandParameters:{runAsync='true',
>>>> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
>>>> storageDomain='StorageDomainStatic:{name='sjcvmstore',
>>>> id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
>>>> args='sjcstorage01:/vmstore'})' execution failed:
>>>> VDSGenericException: VDSErrorException: Failed in
>>>> vdscommand to CreateStorageDomainVDS, error = Error
>>>> creating a storage domain's metadata: ("create meta
>>>> file 'outbox' failed: [Errno 5] Input/output error",)
>>>>
>>>> 2015-09-22 18:46:48,745 INFO
>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
>>>> (default task-23) [6410419] FINISH,
>>>> CreateStorageDomainVDSCommand, log id: b9fe587
>>>>
>>>> 2015-09-22 18:46:48,745 ERROR
>>>> [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
>>>> (default task-23) [6410419] Command
>>>> 'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand'
>>>> failed: EngineException:
>>>> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
>>>> VDSGenericException: VDSErrorException: Failed in
>>>> vdscommand to CreateStorageDomainVDS, error = Error
>>>> creating a storage domain's metadata: ("create meta
>>>> file 'outbox' failed: [Errno 5] Input/output
>>>> error",) (Failed with error
>>>> StorageDomainMetadataCreationError and code 362)
>>>>
>>>> 2015-09-22 18:46:48,755 INFO
>>>> [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
>>>> (default task-23) [6410419] Command
>>>> [id=5ae15f53-69a1-47c5-b3a5-82f32c20e48f]:
>>>> Compensating NEW_ENTITY_ID of
>>>> org.ovirt.engine.core.common.businessentities.StorageDomainDynamic;
>>>> snapshot: 597d5b5b-7c09-4de9-8840-6993bd9b61a6.
>>>>
>>>> 2015-09-22 18:46:48,758 INFO
>>>> [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
>>>> (default task-23) [6410419] Command
>>>> [id=5ae15f53-69a1-47c5-b3a5-82f32c20e48f]:
>>>> Compensating NEW_ENTITY_ID of
>>>> org.ovirt.engine.core.common.businessentities.StorageDomainStatic;
>>>> snapshot: 597d5b5b-7c09-4de9-8840-6993bd9b61a6.
>>>>
>>>> 2015-09-22 18:46:48,769 ERROR
>>>> [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
>>>> (default task-23) [6410419] Transaction rolled-back
>>>> for command
>>>> 'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand'.
>>>>
>>>> 2015-09-22 18:46:48,784 ERROR
>>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>>> (default task-23) [6410419] Correlation ID:
>>>> 6410419, Job ID:
>>>> 78692780-a06f-49a5-b6b1-e6c24a820d62, Call Stack:
>>>> null, Custom Event ID: -1, Message: Failed to add
>>>> Storage Domain sjcvmstore. (User: admin@internal)
>>>>
>>>> 2015-09-22 18:46:48,996 INFO
>>>> [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
>>>> (default task-32) [1635a244] Lock Acquired to
>>>> object
>>>> 'EngineLock:{exclusiveLocks='[ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=<STORAGE_CONNECTION,
>>>> ACTION_TYPE_FAILED_OBJECT_LOCKED>,
>>>> sjcstorage01:/vmstore=<STORAGE_CONNECTION,
>>>> ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
>>>> sharedLocks='null'}'
>>>>
>>>> 2015-09-22 18:46:49,018 INFO
>>>> [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
>>>> (default task-32) [1635a244] Running command:
>>>> RemoveStorageServerConnectionCommand internal:
>>>> false. Entities affected : ID:
>>>> aaa00000-0000-0000-0000-123456789aaa Type:
>>>> SystemAction group CREATE_STORAGE_DOMAIN with role
>>>> type ADMIN
>>>>
>>>> 2015-09-22 18:46:49,024 INFO
>>>> [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
>>>> (default task-32) [1635a244] Removing connection
>>>> 'ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e' from database
>>>>
>>>> 2015-09-22 18:46:49,026 INFO
>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
>>>> (default task-32) [1635a244] START,
>>>> DisconnectStorageServerVDSCommand(HostName =
>>>> sjcvhost03,
>>>> StorageServerConnectionManagementVDSParameters:{runAsync='true',
>>>> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
>>>> storagePoolId='00000000-0000-0000-0000-000000000000',
>>>> storageType='GLUSTERFS',
>>>> connectionList='[StorageServerConnections:{id='ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e',
>>>> connection='sjcstorage01:/vmstore', iqn='null',
>>>> vfsType='glusterfs', mountOptions='null',
>>>> nfsVersion='null', nfsRetrans='null',
>>>> nfsTimeo='null', iface='null',
>>>> netIfaceName='null'}]'}), log id: 39d3b568
>>>>
>>>> 2015-09-22 18:46:49,248 INFO
>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
>>>> (default task-32) [1635a244] FINISH,
>>>> DisconnectStorageServerVDSCommand, return:
>>>> {ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=0}, log id:
>>>> 39d3b568
>>>>
>>>> 2015-09-22 18:46:49,252 INFO
>>>> [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
>>>> (default task-32) [1635a244] Lock freed to object
>>>> 'EngineLock:{exclusiveLocks='[ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=<STORAGE_CONNECTION,
>>>> ACTION_TYPE_FAILED_OBJECT_LOCKED>,
>>>> sjcstorage01:/vmstore=<STORAGE_CONNECTION,
>>>> ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
>>>> sharedLocks='null'}'
>>>>
>>>> 2015-09-22 18:46:49,431 INFO
>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
>>>> (DefaultQuartzScheduler_Worker-3) [] START,
>>>> GlusterVolumesListVDSCommand(HostName =
>>>> sjcstorage01,
>>>> GlusterVolumesListVDSParameters:{runAsync='true',
>>>> hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}),
>>>> log id: 17014ae8
>>>>
>>>> 2015-09-22 18:46:49,511 WARN
>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>>>> (DefaultQuartzScheduler_Worker-3) [] Could not
>>>> associate brick
>>>> 'sjcstorage01:/export/vmstore/brick01' of volume
>>>> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct
>>>> network as no gluster network found in cluster
>>>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>>>
>>>> 2015-09-22 18:46:49,515 WARN
>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>>>> (DefaultQuartzScheduler_Worker-3) [] Could not
>>>> associate brick
>>>> 'sjcstorage02:/export/vmstore/brick01' of volume
>>>> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct
>>>> network as no gluster network found in cluster
>>>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>>>
>>>> 2015-09-22 18:46:49,516 WARN
>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>>>> (DefaultQuartzScheduler_Worker-3) [] Could not add
>>>> brick 'sjcvhost02:/export/vmstore/brick01' to
>>>> volume '030f270a-0999-4df4-9b14-ae56eb0a2fb9' -
>>>> server uuid '29b58278-9aa3-47c5-bfb4-1948ef7fdbba'
>>>> not found in cluster
>>>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>>>
>>>> 2015-09-22 18:46:49,516 INFO
>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
>>>> (DefaultQuartzScheduler_Worker-3) [] FINISH,
>>>> GlusterVolumesListVDSCommand, return:
>>>> {030f270a-0999-4df4-9b14-ae56eb0a2fb9=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@92ed0f75},
>>>> log id: 17014ae8
>>>>
>>>>
>>>>
>>>> oVirt engine sees sjcstorage01 as sjcstorage01. It's all a testbed at
>>>> the moment and uses short names only, defined in /etc/hosts (the same
>>>> file copied to each server for consistency).
>>>>
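>>>> A quick way to double-check that is to resolve every short name on
>>>> every host and compare the answers (a sketch only - it assumes root
>>>> ssh between the three hosts, which nothing above confirms):
>>>>
>>>> for h in sjcvhost02 sjcstorage01 sjcstorage02; do
>>>>   echo "== $h =="
>>>>   # resolve each short name the way glibc (and so gluster/vdsm) would
>>>>   ssh root@$h 'for n in sjcvhost02 sjcstorage01 sjcstorage02; do getent hosts $n; done'
>>>> done
>>>>
>>>> 'gluster peer status' on one of the storage nodes also shows which
>>>> names the peers were probed with.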
>>>>
>>>> volume status for vmstore is:
>>>>
>>>>
>>>> Status of volume: vmstore
>>>> Gluster process                             TCP Port  RDMA Port  Online  Pid
>>>> ------------------------------------------------------------------------------
>>>> Brick sjcstorage01:/export/vmstore/brick01  49157     0          Y       7444
>>>> Brick sjcstorage02:/export/vmstore/brick01  49157     0          Y       4063
>>>> Brick sjcvhost02:/export/vmstore/brick01    49156     0          Y       3243
>>>> NFS Server on localhost                     2049      0          Y       3268
>>>> Self-heal Daemon on localhost               N/A       N/A        Y       3284
>>>> NFS Server on sjcstorage01                  2049      0          Y       7463
>>>> Self-heal Daemon on sjcstorage01            N/A       N/A        Y       7472
>>>> NFS Server on sjcstorage02                  2049      0          Y       4082
>>>> Self-heal Daemon on sjcstorage02            N/A       N/A        Y       4090
>>>>
>>>> Task Status of Volume vmstore
>>>> ------------------------------------------------------------------------------
>>>> There are no active volume tasks
>>>>
>>>>
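>>>> All bricks and self-heal daemons show Online above. If it helps, the
>>>> replica/arbiter layout and any pending heals can be confirmed from any
>>>> storage node with the standard gluster CLI:
>>>>
>>>> gluster volume info vmstore        # shows the replica/arbiter layout and the options set
>>>> gluster volume heal vmstore info   # lists entries still waiting for self-heal, if any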
>>>>
>>>> vdsm logs from the time the domain is added:
>>>>
>>>>
>>>> Thread-789::DEBUG::2015-09-22
>>>> 19:12:05,865::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
>>>> Thread-790::DEBUG::2015-09-22
>>>> 19:12:07,797::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::moving
>>>> from state init -> state preparing
>>>>
>>>> Thread-790::INFO::2015-09-22
>>>> 19:12:07,797::logUtils::48::dispatcher::(wrapper)
>>>> Run and protect: repoStats(options=None)
>>>>
>>>> Thread-790::INFO::2015-09-22
>>>> 19:12:07,797::logUtils::51::dispatcher::(wrapper)
>>>> Run and protect: repoStats, Return response: {}
>>>>
>>>> Thread-790::DEBUG::2015-09-22
>>>> 19:12:07,797::task::1191::Storage.TaskManager.Task::(prepare)
>>>> Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::finished:
>>>> {}
>>>>
>>>> Thread-790::DEBUG::2015-09-22
>>>> 19:12:07,797::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::moving
>>>> from state preparing -> state finished
>>>>
>>>> Thread-790::DEBUG::2015-09-22
>>>> 19:12:07,797::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>>> Owner.releaseAll requests {} resources {}
>>>>
>>>> Thread-790::DEBUG::2015-09-22
>>>> 19:12:07,797::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>>> Owner.cancelAll requests {}
>>>>
>>>> Thread-790::DEBUG::2015-09-22
>>>> 19:12:07,797::task::993::Storage.TaskManager.Task::(_decref)
>>>> Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::ref 0
>>>> aborting False
>>>>
>>>> Thread-790::DEBUG::2015-09-22
>>>> 19:12:07,802::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
>>>> Reactor thread::INFO::2015-09-22
>>>> 19:12:14,816::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
>>>> Accepting connection from 127.0.0.1:52510
>>>>
>>>> Reactor thread::DEBUG::2015-09-22
>>>> 19:12:14,822::protocoldetector::82::ProtocolDetector.Detector::(__init__)
>>>> Using required_size=11
>>>>
>>>> Reactor thread::INFO::2015-09-22
>>>> 19:12:14,823::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
>>>> Detected protocol xml from 127.0.0.1:52510
>>>>
>>>> Reactor thread::DEBUG::2015-09-22
>>>> 19:12:14,823::bindingxmlrpc::1297::XmlDetector::(handle_socket)
>>>> xml over http detected from ('127.0.0.1', 52510)
>>>>
>>>> BindingXMLRPC::INFO::2015-09-22
>>>> 19:12:14,823::xmlrpc::73::vds.XMLRPCServer::(handle_request)
>>>> Starting request handler for 127.0.0.1:52510
>>>>
>>>> Thread-791::INFO::2015-09-22
>>>> 19:12:14,823::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
>>>> Request handler for 127.0.0.1:52510 started
>>>>
>>>> Thread-791::INFO::2015-09-22
>>>> 19:12:14,825::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
>>>> Request handler for 127.0.0.1:52510 stopped
>>>>
>>>> Thread-792::DEBUG::2015-09-22
>>>> 19:12:20,872::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
>>>> Thread-793::DEBUG::2015-09-22
>>>> 19:12:22,832::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::moving
>>>> from state init -> state preparing
>>>>
>>>> Thread-793::INFO::2015-09-22
>>>> 19:12:22,832::logUtils::48::dispatcher::(wrapper)
>>>> Run and protect: repoStats(options=None)
>>>>
>>>> Thread-793::INFO::2015-09-22
>>>> 19:12:22,832::logUtils::51::dispatcher::(wrapper)
>>>> Run and protect: repoStats, Return response: {}
>>>>
>>>> Thread-793::DEBUG::2015-09-22
>>>> 19:12:22,832::task::1191::Storage.TaskManager.Task::(prepare)
>>>> Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::finished:
>>>> {}
>>>>
>>>> Thread-793::DEBUG::2015-09-22
>>>> 19:12:22,832::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::moving
>>>> from state preparing -> state finished
>>>>
>>>> Thread-793::DEBUG::2015-09-22
>>>> 19:12:22,833::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>>> Owner.releaseAll requests {} resources {}
>>>>
>>>> Thread-793::DEBUG::2015-09-22
>>>> 19:12:22,833::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>>> Owner.cancelAll requests {}
>>>>
>>>> Thread-793::DEBUG::2015-09-22
>>>> 19:12:22,833::task::993::Storage.TaskManager.Task::(_decref)
>>>> Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::ref 0
>>>> aborting False
>>>>
>>>> Thread-793::DEBUG::2015-09-22
>>>> 19:12:22,837::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
>>>> Reactor thread::INFO::2015-09-22
>>>> 19:12:29,841::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
>>>> Accepting connection from 127.0.0.1:52511
>>>>
>>>> Reactor thread::DEBUG::2015-09-22
>>>> 19:12:29,848::protocoldetector::82::ProtocolDetector.Detector::(__init__)
>>>> Using required_size=11
>>>>
>>>> Reactor thread::INFO::2015-09-22
>>>> 19:12:29,849::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
>>>> Detected protocol xml from 127.0.0.1:52511
>>>>
>>>> Reactor thread::DEBUG::2015-09-22
>>>> 19:12:29,849::bindingxmlrpc::1297::XmlDetector::(handle_socket)
>>>> xml over http detected from ('127.0.0.1', 52511)
>>>>
>>>> BindingXMLRPC::INFO::2015-09-22
>>>> 19:12:29,849::xmlrpc::73::vds.XMLRPCServer::(handle_request)
>>>> Starting request handler for 127.0.0.1:52511
>>>>
>>>> Thread-794::INFO::2015-09-22
>>>> 19:12:29,849::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
>>>> Request handler for 127.0.0.1:52511 started
>>>>
>>>> Thread-794::INFO::2015-09-22
>>>> 19:12:29,851::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
>>>> Request handler for 127.0.0.1:52511 stopped
>>>>
>>>> Thread-795::DEBUG::2015-09-22
>>>> 19:12:35,520::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
>>>> Calling 'StoragePool.connectStorageServer' in
>>>> bridge with {u'connectionParams': [{u'id':
>>>> u'00000000-0000-0000-0000-000000000000',
>>>> u'connection': u'sjcstorage01:/vmstore', u'iqn':
>>>> u'', u'user': u'', u'tpgt': u'1', u'vfs_type':
>>>> u'glusterfs', u'password': '********', u'port':
>>>> u''}], u'storagepoolID':
>>>> u'00000000-0000-0000-0000-000000000000',
>>>> u'domainType': 7}
>>>>
>>>> Thread-795::DEBUG::2015-09-22
>>>> 19:12:35,520::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::moving
>>>> from state init -> state preparing
>>>>
>>>> Thread-795::INFO::2015-09-22
>>>> 19:12:35,521::logUtils::48::dispatcher::(wrapper)
>>>> Run and protect: connectStorageServer(domType=7,
>>>> spUUID=u'00000000-0000-0000-0000-000000000000',
>>>> conList=[{u'id':
>>>> u'00000000-0000-0000-0000-000000000000',
>>>> u'connection': u'sjcstorage01:/vmstore', u'iqn':
>>>> u'', u'user': u'', u'tpgt': u'1', u'vfs_type':
>>>> u'glusterfs', u'password': '********', u'port':
>>>> u''}], options=None)
>>>>
>>>> Thread-795::DEBUG::2015-09-22
>>>> 19:12:35,539::fileUtils::143::Storage.fileUtils::(createdir)
>>>> Creating directory:
>>>> /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore
>>>> mode: None
>>>>
>>>> Thread-795::DEBUG::2015-09-22
>>>> 19:12:35,540::mount::229::Storage.Misc.excCmd::(_runcmd)
>>>> /usr/bin/sudo -n /usr/bin/systemd-run --scope
>>>> --slice=vdsm-glusterfs /usr/bin/mount -t glusterfs
>>>> -o backup-volfile-servers=sjcstorage02:sjcvhost02
>>>> sjcstorage01:/vmstore
>>>> /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore
>>>> (cwd None)
>>>>
>>>> Thread-795::DEBUG::2015-09-22
>>>> 19:12:35,706::hsm::2417::Storage.HSM::(__prefetchDomains)
>>>> glusterDomPath: glusterSD/*
>>>>
>>>> Thread-796::DEBUG::2015-09-22
>>>> 19:12:35,707::__init__::298::IOProcessClient::(_run) Starting
>>>> IOProcess...
>>>>
>>>> Thread-797::DEBUG::2015-09-22
>>>> 19:12:35,712::__init__::298::IOProcessClient::(_run) Starting
>>>> IOProcess...
>>>>
>>>> Thread-795::DEBUG::2015-09-22
>>>> 19:12:35,721::hsm::2429::Storage.HSM::(__prefetchDomains)
>>>> Found SD uuids: ()
>>>>
>>>> Thread-795::DEBUG::2015-09-22
>>>> 19:12:35,721::hsm::2489::Storage.HSM::(connectStorageServer)
>>>> knownSDs: {41b75ca9-9783-42a7-9a23-10a2ae3cbb96:
>>>> storage.glusterSD.findDomain,
>>>> 597d5b5b-7c09-4de9-8840-6993bd9b61a6:
>>>> storage.glusterSD.findDomain,
>>>> ef17fec4-fecf-4d7e-b815-d1db4ef65225:
>>>> storage.glusterSD.findDomain}
>>>>
>>>> Thread-795::INFO::2015-09-22
>>>> 19:12:35,721::logUtils::51::dispatcher::(wrapper)
>>>> Run and protect: connectStorageServer, Return
>>>> response: {'statuslist': [{'status': 0, 'id':
>>>> u'00000000-0000-0000-0000-000000000000'}]}
>>>>
>>>> Thread-795::DEBUG::2015-09-22
>>>> 19:12:35,722::task::1191::Storage.TaskManager.Task::(prepare)
>>>> Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::finished:
>>>> {'statuslist': [{'status': 0, 'id':
>>>> u'00000000-0000-0000-0000-000000000000'}]}
>>>>
>>>> Thread-795::DEBUG::2015-09-22
>>>> 19:12:35,722::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::moving
>>>> from state preparing -> state finished
>>>>
>>>> Thread-795::DEBUG::2015-09-22
>>>> 19:12:35,722::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>>> Owner.releaseAll requests {} resources {}
>>>>
>>>> Thread-795::DEBUG::2015-09-22
>>>> 19:12:35,722::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>>> Owner.cancelAll requests {}
>>>>
>>>> Thread-795::DEBUG::2015-09-22
>>>> 19:12:35,722::task::993::Storage.TaskManager.Task::(_decref)
>>>> Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::ref 0
>>>> aborting False
>>>>
>>>> Thread-795::DEBUG::2015-09-22
>>>> 19:12:35,722::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
>>>> Return 'StoragePool.connectStorageServer' in bridge
>>>> with [{'status': 0, 'id':
>>>> u'00000000-0000-0000-0000-000000000000'}]
>>>>
>>>> Thread-795::DEBUG::2015-09-22
>>>> 19:12:35,722::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
>>>> Thread-798::DEBUG::2015-09-22
>>>> 19:12:35,775::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
>>>> Calling 'StoragePool.connectStorageServer' in
>>>> bridge with {u'connectionParams': [{u'id':
>>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054',
>>>> u'connection': u'sjcstorage01:/vmstore', u'iqn':
>>>> u'', u'user': u'', u'tpgt': u'1', u'vfs_type':
>>>> u'glusterfs', u'password': '********', u'port':
>>>> u''}], u'storagepoolID':
>>>> u'00000000-0000-0000-0000-000000000000',
>>>> u'domainType': 7}
>>>>
>>>> Thread-798::DEBUG::2015-09-22
>>>> 19:12:35,775::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::moving
>>>> from state init -> state preparing
>>>>
>>>> Thread-798::INFO::2015-09-22
>>>> 19:12:35,776::logUtils::48::dispatcher::(wrapper)
>>>> Run and protect: connectStorageServer(domType=7,
>>>> spUUID=u'00000000-0000-0000-0000-000000000000',
>>>> conList=[{u'id':
>>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054',
>>>> u'connection': u'sjcstorage01:/vmstore', u'iqn':
>>>> u'', u'user': u'', u'tpgt': u'1', u'vfs_type':
>>>> u'glusterfs', u'password': '********', u'port':
>>>> u''}], options=None)
>>>>
>>>> Thread-798::DEBUG::2015-09-22
>>>> 19:12:35,777::hsm::2417::Storage.HSM::(__prefetchDomains)
>>>> glusterDomPath: glusterSD/*
>>>>
>>>> Thread-798::DEBUG::2015-09-22
>>>> 19:12:35,782::hsm::2429::Storage.HSM::(__prefetchDomains)
>>>> Found SD uuids: ()
>>>>
>>>> Thread-798::DEBUG::2015-09-22
>>>> 19:12:35,782::hsm::2489::Storage.HSM::(connectStorageServer)
>>>> knownSDs: {41b75ca9-9783-42a7-9a23-10a2ae3cbb96:
>>>> storage.glusterSD.findDomain,
>>>> 597d5b5b-7c09-4de9-8840-6993bd9b61a6:
>>>> storage.glusterSD.findDomain,
>>>> ef17fec4-fecf-4d7e-b815-d1db4ef65225:
>>>> storage.glusterSD.findDomain}
>>>>
>>>> Thread-798::INFO::2015-09-22
>>>> 19:12:35,782::logUtils::51::dispatcher::(wrapper)
>>>> Run and protect: connectStorageServer, Return
>>>> response: {'statuslist': [{'status': 0, 'id':
>>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
>>>>
>>>> Thread-798::DEBUG::2015-09-22
>>>> 19:12:35,783::task::1191::Storage.TaskManager.Task::(prepare)
>>>> Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::finished:
>>>> {'statuslist': [{'status': 0, 'id':
>>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
>>>>
>>>> Thread-798::DEBUG::2015-09-22
>>>> 19:12:35,783::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::moving
>>>> from state preparing -> state finished
>>>>
>>>> Thread-798::DEBUG::2015-09-22
>>>> 19:12:35,783::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>>> Owner.releaseAll requests {} resources {}
>>>>
>>>> Thread-798::DEBUG::2015-09-22
>>>> 19:12:35,783::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>>> Owner.cancelAll requests {}
>>>>
>>>> Thread-798::DEBUG::2015-09-22
>>>> 19:12:35,783::task::993::Storage.TaskManager.Task::(_decref)
>>>> Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::ref 0
>>>> aborting False
>>>>
>>>> Thread-798::DEBUG::2015-09-22
>>>> 19:12:35,783::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
>>>> Return 'StoragePool.connectStorageServer' in bridge
>>>> with [{'status': 0, 'id':
>>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]
>>>>
>>>> Thread-798::DEBUG::2015-09-22
>>>> 19:12:35,783::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,787::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
>>>> Calling 'StorageDomain.create' in bridge with
>>>> {u'name': u'sjcvmstore01', u'domainType': 7,
>>>> u'domainClass': 1, u'typeArgs':
>>>> u'sjcstorage01:/vmstore', u'version': u'3',
>>>> u'storagedomainID':
>>>> u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3'}
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,788::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving
>>>> from state init -> state preparing
>>>>
>>>> Thread-801::INFO::2015-09-22
>>>> 19:12:35,788::logUtils::48::dispatcher::(wrapper)
>>>> Run and protect: createStorageDomain(storageType=7,
>>>> sdUUID=u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',
>>>> domainName=u'sjcvmstore01',
>>>> typeSpecificArg=u'sjcstorage01:/vmstore',
>>>> domClass=1, domVersion=u'3', options=None)
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,788::misc::733::Storage.SamplingMethod::(__call__)
>>>> Trying to enter sampling method
>>>> (storage.sdc.refreshStorage)
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,788::misc::736::Storage.SamplingMethod::(__call__)
>>>> Got in to sampling method
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,788::misc::733::Storage.SamplingMethod::(__call__)
>>>> Trying to enter sampling method (storage.iscsi.rescan)
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,788::misc::736::Storage.SamplingMethod::(__call__)
>>>> Got in to sampling method
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,788::iscsi::431::Storage.ISCSI::(rescan)
>>>> Performing SCSI scan, this will take up to 30 seconds
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,788::iscsiadm::97::Storage.Misc.excCmd::(_runCmd)
>>>> /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd
>>>> None)
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,821::misc::743::Storage.SamplingMethod::(__call__)
>>>> Returning last result
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,821::misc::733::Storage.SamplingMethod::(__call__)
>>>> Trying to enter sampling method (storage.hba.rescan)
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,821::misc::736::Storage.SamplingMethod::(__call__)
>>>> Got in to sampling method
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,821::hba::56::Storage.HBA::(rescan)
>>>> Starting scan
>>>>
>>>> Thread-802::DEBUG::2015-09-22
>>>> 19:12:35,882::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,912::hba::62::Storage.HBA::(rescan) Scan
>>>> finished
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,912::misc::743::Storage.SamplingMethod::(__call__)
>>>> Returning last result
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,912::multipath::77::Storage.Misc.excCmd::(rescan)
>>>> /usr/bin/sudo -n /usr/sbin/multipath (cwd None)
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,936::multipath::77::Storage.Misc.excCmd::(rescan)
>>>> SUCCESS: <err> = ''; <rc> = 0
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,936::utils::661::root::(execCmd)
>>>> /sbin/udevadm settle --timeout=5 (cwd None)
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,946::utils::679::root::(execCmd) SUCCESS:
>>>> <err> = ''; <rc> = 0
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,947::lvm::498::Storage.OperationMutex::(_invalidateAllPvs)
>>>> Operation 'lvm invalidate operation' got the
>>>> operation mutex
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,947::lvm::500::Storage.OperationMutex::(_invalidateAllPvs)
>>>> Operation 'lvm invalidate operation' released the
>>>> operation mutex
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,947::lvm::509::Storage.OperationMutex::(_invalidateAllVgs)
>>>> Operation 'lvm invalidate operation' got the
>>>> operation mutex
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,948::lvm::511::Storage.OperationMutex::(_invalidateAllVgs)
>>>> Operation 'lvm invalidate operation' released the
>>>> operation mutex
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,948::lvm::529::Storage.OperationMutex::(_invalidateAllLvs)
>>>> Operation 'lvm invalidate operation' got the
>>>> operation mutex
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,948::lvm::531::Storage.OperationMutex::(_invalidateAllLvs)
>>>> Operation 'lvm invalidate operation' released the
>>>> operation mutex
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,948::misc::743::Storage.SamplingMethod::(__call__)
>>>> Returning last result
>>>>
>>>> Thread-801::ERROR::2015-09-22
>>>> 19:12:35,949::sdc::138::Storage.StorageDomainCache::(_findDomain)
>>>> looking for unfetched domain
>>>> c02fda97-62e3-40d3-9a6e-ac5d100f8ad3
>>>>
>>>> Thread-801::ERROR::2015-09-22
>>>> 19:12:35,949::sdc::155::Storage.StorageDomainCache::(_findUnfetchedDomain)
>>>> looking for domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,949::lvm::371::Storage.OperationMutex::(_reloadvgs)
>>>> Operation 'lvm reload operation' got the operation
>>>> mutex
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,950::lvm::291::Storage.Misc.excCmd::(cmd)
>>>> /usr/bin/sudo -n /usr/sbin/lvm vgs --config '
>>>> devices { preferred_names = ["^/dev/mapper/"]
>>>> ignore_suspended_devices=1 write_cache_state=0
>>>> disable_after_error_count=3
>>>> obtain_device_list_from_udev=0 filter = [
>>>> '\''r|.*|'\'' ] } global { locking_type=1
>>>> prioritise_write_locks=1 wait_for_locks=1
>>>> use_lvmetad=0 } backup { retain_min = 50
>>>> retain_days = 0 } ' --noheadings --units b
>>>> --nosuffix --separator '|' --ignoreskippedcluster
>>>> -o
>>>> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
>>>> c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 (cwd None)
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,985::lvm::291::Storage.Misc.excCmd::(cmd)
>>>> FAILED: <err> = ' WARNING: lvmetad is running but
>>>> disabled. Restart lvmetad before enabling it!\n
>>>> Volume group
>>>> "c02fda97-62e3-40d3-9a6e-ac5d100f8ad3" not found\n
>>>> Cannot process volume group
>>>> c02fda97-62e3-40d3-9a6e-ac5d100f8ad3\n'; <rc> = 5
>>>>
>>>> Thread-801::WARNING::2015-09-22
>>>> 19:12:35,986::lvm::376::Storage.LVM::(_reloadvgs)
>>>> lvm vgs failed: 5 [] [' WARNING: lvmetad is
>>>> running but disabled. Restart lvmetad before
>>>> enabling it!', ' Volume group
>>>> "c02fda97-62e3-40d3-9a6e-ac5d100f8ad3" not found',
>>>> ' Cannot process volume group
>>>> c02fda97-62e3-40d3-9a6e-ac5d100f8ad3']
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,987::lvm::416::Storage.OperationMutex::(_reloadvgs)
>>>> Operation 'lvm reload operation' released the
>>>> operation mutex
>>>>
>>>> Thread-801::ERROR::2015-09-22
>>>> 19:12:35,997::sdc::144::Storage.StorageDomainCache::(_findDomain)
>>>> domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 not found
>>>>
>>>> Traceback (most recent call last):
>>>>
>>>> File "/usr/share/vdsm/storage/sdc.py", line 142,
>>>> in _findDomain
>>>>
>>>> dom = findMethod(sdUUID)
>>>>
>>>> File "/usr/share/vdsm/storage/sdc.py", line 172,
>>>> in _findUnfetchedDomain
>>>>
>>>> raise se.StorageDomainDoesNotExist(sdUUID)
>>>>
>>>> StorageDomainDoesNotExist: Storage domain does not
>>>> exist: (u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',)
>>>>
>>>> Thread-801::INFO::2015-09-22
>>>> 19:12:35,998::nfsSD::69::Storage.StorageDomain::(create)
>>>> sdUUID=c02fda97-62e3-40d3-9a6e-ac5d100f8ad3
>>>> domainName=sjcvmstore01
>>>> remotePath=sjcstorage01:/vmstore domClass=1
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:36,015::__init__::298::IOProcessClient::(_run) Starting
>>>> IOProcess...
>>>>
>>>> Thread-801::ERROR::2015-09-22
>>>> 19:12:36,067::task::866::Storage.TaskManager.Task::(_setError)
>>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Unexpected
>>>> error
>>>>
>>>> Traceback (most recent call last):
>>>>
>>>> File "/usr/share/vdsm/storage/task.py", line 873,
>>>> in _run
>>>>
>>>> return fn(*args, **kargs)
>>>>
>>>> File "/usr/share/vdsm/logUtils.py", line 49, in
>>>> wrapper
>>>>
>>>> res = f(*args, **kwargs)
>>>>
>>>> File "/usr/share/vdsm/storage/hsm.py", line 2697,
>>>> in createStorageDomain
>>>>
>>>> domVersion)
>>>>
>>>> File "/usr/share/vdsm/storage/nfsSD.py", line 84,
>>>> in create
>>>>
>>>> remotePath, storageType, version)
>>>>
>>>> File "/usr/share/vdsm/storage/fileSD.py", line
>>>> 264, in _prepareMetadata
>>>>
>>>> "create meta file '%s' failed: %s" % (metaFile,
>>>> str(e)))
>>>>
>>>> StorageDomainMetadataCreationError: Error creating
>>>> a storage domain's metadata: ("create meta file
>>>> 'outbox' failed: [Errno 5] Input/output error",)
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:36,067::task::885::Storage.TaskManager.Task::(_run)
>>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Task._run:
>>>> d2d29352-8677-45cb-a4ab-06aa32cf1acb (7,
>>>> u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',
>>>> u'sjcvmstore01', u'sjcstorage01:/vmstore', 1, u'3')
>>>> {} failed - stopping task
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:36,067::task::1246::Storage.TaskManager.Task::(stop)
>>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::stopping
>>>> in state preparing (force False)
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:36,067::task::993::Storage.TaskManager.Task::(_decref)
>>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::ref 1
>>>> aborting True
>>>>
>>>> Thread-801::INFO::2015-09-22
>>>> 19:12:36,067::task::1171::Storage.TaskManager.Task::(prepare)
>>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::aborting:
>>>> Task is aborted: "Error creating a storage domain's
>>>> metadata" - code 362
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:36,068::task::1176::Storage.TaskManager.Task::(prepare)
>>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Prepare:
>>>> aborted: Error creating a storage domain's metadata
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:36,068::task::993::Storage.TaskManager.Task::(_decref)
>>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::ref 0
>>>> aborting True
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:36,068::task::928::Storage.TaskManager.Task::(_doAbort)
>>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Task._doAbort:
>>>> force False
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:36,068::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>>> Owner.cancelAll requests {}
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:36,068::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving
>>>> from state preparing -> state aborting
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:36,068::task::550::Storage.TaskManager.Task::(__state_aborting)
>>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::_aborting:
>>>> recover policy none
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:36,068::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving
>>>> from state aborting -> state failed
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:36,068::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>>> Owner.releaseAll requests {} resources {}
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:36,068::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>>> Owner.cancelAll requests {}
>>>>
>>>> Thread-801::ERROR::2015-09-22
>>>> 19:12:36,068::dispatcher::76::Storage.Dispatcher::(wrapper)
>>>> {'status': {'message': 'Error creating a storage
>>>> domain\'s metadata: ("create meta file \'outbox\'
>>>> failed: [Errno 5] Input/output error",)', 'code': 362}}
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:36,069::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
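>>>> The "[Errno 5] Input/output error" above is returned by the FUSE
>>>> client while vdsm writes the domain metadata files, so the matching
>>>> gluster logs around 19:12:36 are the next place to look (paths below
>>>> assume the default gluster log locations and naming):
>>>>
>>>> # on the hypervisor: the client log for the mount vdsm created
>>>> less /var/log/glusterfs/rhev-data-center-mnt-glusterSD-sjcstorage01:_vmstore.log
>>>>
>>>> # on each storage node: the brick log for /export/vmstore/brick01
>>>> less /var/log/glusterfs/bricks/export-vmstore-brick01.log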
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,180::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
>>>> Calling 'StoragePool.disconnectStorageServer' in
>>>> bridge with {u'connectionParams': [{u'id':
>>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054',
>>>> u'connection': u'sjcstorage01:/vmstore', u'iqn':
>>>> u'', u'user': u'', u'tpgt': u'1', u'vfs_type':
>>>> u'glusterfs', u'password': '********', u'port':
>>>> u''}], u'storagepoolID':
>>>> u'00000000-0000-0000-0000-000000000000',
>>>> u'domainType': 7}
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,181::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::moving
>>>> from state init -> state preparing
>>>>
>>>> Thread-807::INFO::2015-09-22
>>>> 19:12:36,182::logUtils::48::dispatcher::(wrapper)
>>>> Run and protect: disconnectStorageServer(domType=7,
>>>> spUUID=u'00000000-0000-0000-0000-000000000000',
>>>> conList=[{u'id':
>>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054',
>>>> u'connection': u'sjcstorage01:/vmstore', u'iqn':
>>>> u'', u'user': u'', u'tpgt': u'1', u'vfs_type':
>>>> u'glusterfs', u'password': '********', u'port':
>>>> u''}], options=None)
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,182::mount::229::Storage.Misc.excCmd::(_runcmd)
>>>> /usr/bin/sudo -n /usr/bin/umount -f -l
>>>> /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore
>>>> (cwd None)
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,222::misc::733::Storage.SamplingMethod::(__call__)
>>>> Trying to enter sampling method
>>>> (storage.sdc.refreshStorage)
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,222::misc::736::Storage.SamplingMethod::(__call__)
>>>> Got in to sampling method
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,222::misc::733::Storage.SamplingMethod::(__call__)
>>>> Trying to enter sampling method (storage.iscsi.rescan)
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,222::misc::736::Storage.SamplingMethod::(__call__)
>>>> Got in to sampling method
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,223::iscsi::431::Storage.ISCSI::(rescan)
>>>> Performing SCSI scan, this will take up to 30 seconds
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,223::iscsiadm::97::Storage.Misc.excCmd::(_runCmd)
>>>> /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd
>>>> None)
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,258::misc::743::Storage.SamplingMethod::(__call__)
>>>> Returning last result
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,258::misc::733::Storage.SamplingMethod::(__call__)
>>>> Trying to enter sampling method (storage.hba.rescan)
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,258::misc::736::Storage.SamplingMethod::(__call__)
>>>> Got in to sampling method
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,258::hba::56::Storage.HBA::(rescan)
>>>> Starting scan
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,350::hba::62::Storage.HBA::(rescan) Scan
>>>> finished
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,350::misc::743::Storage.SamplingMethod::(__call__)
>>>> Returning last result
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,350::multipath::77::Storage.Misc.excCmd::(rescan)
>>>> /usr/bin/sudo -n /usr/sbin/multipath (cwd None)
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,374::multipath::77::Storage.Misc.excCmd::(rescan)
>>>> SUCCESS: <err> = ''; <rc> = 0
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,374::utils::661::root::(execCmd)
>>>> /sbin/udevadm settle --timeout=5 (cwd None)
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,383::utils::679::root::(execCmd) SUCCESS:
>>>> <err> = ''; <rc> = 0
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,384::lvm::498::Storage.OperationMutex::(_invalidateAllPvs)
>>>> Operation 'lvm invalidate operation' got the
>>>> operation mutex
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,385::lvm::500::Storage.OperationMutex::(_invalidateAllPvs)
>>>> Operation 'lvm invalidate operation' released the
>>>> operation mutex
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,385::lvm::509::Storage.OperationMutex::(_invalidateAllVgs)
>>>> Operation 'lvm invalidate operation' got the
>>>> operation mutex
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,385::lvm::511::Storage.OperationMutex::(_invalidateAllVgs)
>>>> Operation 'lvm invalidate operation' released the
>>>> operation mutex
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,386::lvm::529::Storage.OperationMutex::(_invalidateAllLvs)
>>>> Operation 'lvm invalidate operation' got the
>>>> operation mutex
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,386::lvm::531::Storage.OperationMutex::(_invalidateAllLvs)
>>>> Operation 'lvm invalidate operation' released the
>>>> operation mutex
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,386::misc::743::Storage.SamplingMethod::(__call__)
>>>> Returning last result
>>>>
>>>> Thread-807::INFO::2015-09-22
>>>> 19:12:36,386::logUtils::51::dispatcher::(wrapper)
>>>> Run and protect: disconnectStorageServer, Return
>>>> response: {'statuslist': [{'status': 0, 'id':
>>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,387::task::1191::Storage.TaskManager.Task::(prepare)
>>>> Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::finished:
>>>> {'statuslist': [{'status': 0, 'id':
>>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,387::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::moving
>>>> from state preparing -> state finished
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,387::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>>> Owner.releaseAll requests {} resources {}
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,387::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>>> Owner.cancelAll requests {}
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,387::task::993::Storage.TaskManager.Task::(_decref)
>>>> Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::ref 0
>>>> aborting False
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,388::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
>>>> Return 'StoragePool.disconnectStorageServer' in
>>>> bridge with [{'status': 0, 'id':
>>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,388::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
>>>> Thread-808::DEBUG::2015-09-22
>>>> 19:12:37,868::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving
>>>> from state init -> state preparing
>>>>
>>>> Thread-808::INFO::2015-09-22
>>>> 19:12:37,868::logUtils::48::dispatcher::(wrapper)
>>>> Run and protect: repoStats(options=None)
>>>>
>>>> Thread-808::INFO::2015-09-22
>>>> 19:12:37,868::logUtils::51::dispatcher::(wrapper)
>>>> Run and protect: repoStats, Return response: {}
>>>>
>>>> Thread-808::DEBUG::2015-09-22
>>>> 19:12:37,868::task::1191::Storage.TaskManager.Task::(prepare)
>>>> Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::finished:
>>>> {}
>>>>
>>>> Thread-808::DEBUG::2015-09-22
>>>> 19:12:37,868::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving
>>>> from state preparing -> state finished
>>>>
>>>> Thread-808::DEBUG::2015-09-22
>>>> 19:12:37,868::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>>> Owner.releaseAll requests {} resources {}
>>>>
>>>> Thread-808::DEBUG::2015-09-22
>>>> 19:12:37,868::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>>> Owner.cancelAll requests {}
>>>>
>>>> Thread-808::DEBUG::2015-09-22
>>>> 19:12:37,868::task::993::Storage.TaskManager.Task::(_decref)
>>>> Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::ref 0
>>>> aborting False
>>>>
>>>> Thread-808::DEBUG::2015-09-22
>>>> 19:12:37,873::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
>>>> Reactor thread::INFO::2015-09-22
>>>> 19:12:44,867::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
>>>> Accepting connection from 127.0.0.1:52512
>>>>
>>>> Reactor thread::DEBUG::2015-09-22
>>>> 19:12:44,874::protocoldetector::82::ProtocolDetector.Detector::(__init__)
>>>> Using required_size=11
>>>>
>>>> Reactor thread::INFO::2015-09-22
>>>> 19:12:44,875::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
>>>> Detected protocol xml from 127.0.0.1:52512
>>>>
>>>> Reactor thread::DEBUG::2015-09-22
>>>> 19:12:44,875::bindingxmlrpc::1297::XmlDetector::(handle_socket)
>>>> xml over http detected from ('127.0.0.1', 52512)
>>>>
>>>> BindingXMLRPC::INFO::2015-09-22
>>>> 19:12:44,875::xmlrpc::73::vds.XMLRPCServer::(handle_request)
>>>> Starting request handler for 127.0.0.1:52512
>>>>
>>>> Thread-809::INFO::2015-09-22
>>>> 19:12:44,876::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
>>>> Request handler for 127.0.0.1:52512 started
>>>>
>>>> Thread-809::INFO::2015-09-22
>>>> 19:12:44,877::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
>>>> Request handler for 127.0.0.1:52512 stopped
>>>>
>>>> Thread-810::DEBUG::2015-09-22
>>>> 19:12:50,889::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
>>>> Thread-811::DEBUG::2015-09-22
>>>> 19:12:52,902::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::moving
>>>> from state init -> state preparing
>>>>
>>>> Thread-811::INFO::2015-09-22
>>>> 19:12:52,902::logUtils::48::dispatcher::(wrapper)
>>>> Run and protect: repoStats(options=None)
>>>>
>>>> Thread-811::INFO::2015-09-22
>>>> 19:12:52,902::logUtils::51::dispatcher::(wrapper)
>>>> Run and protect: repoStats, Return response: {}
>>>>
>>>> Thread-811::DEBUG::2015-09-22
>>>> 19:12:52,902::task::1191::Storage.TaskManager.Task::(prepare)
>>>> Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::finished:
>>>> {}
>>>>
>>>> Thread-811::DEBUG::2015-09-22
>>>> 19:12:52,903::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::moving
>>>> from state preparing -> state finished
>>>>
>>>> Thread-811::DEBUG::2015-09-22
>>>> 19:12:52,903::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>>> Owner.releaseAll requests {} resources {}
>>>>
>>>> Thread-811::DEBUG::2015-09-22
>>>> 19:12:52,903::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>>> Owner.cancelAll requests {}
>>>>
>>>> Thread-811::DEBUG::2015-09-22
>>>> 19:12:52,903::task::993::Storage.TaskManager.Task::(_decref)
>>>> Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::ref 0
>>>> aborting False
>>>>
>>>> Thread-811::DEBUG::2015-09-22
>>>> 19:12:52,908::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
>>>> Reactor thread::INFO::2015-09-22
>>>> 19:12:59,895::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
>>>> Accepting connection from 127.0.0.1:52513
>>>>
>>>> Reactor thread::DEBUG::2015-09-22
>>>> 19:12:59,902::protocoldetector::82::ProtocolDetector.Detector::(__init__)
>>>> Using required_size=11
>>>>
>>>> Reactor thread::INFO::2015-09-22
>>>> 19:12:59,902::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
>>>> Detected protocol xml from 127.0.0.1:52513
>>>>
>>>> Reactor thread::DEBUG::2015-09-22
>>>> 19:12:59,902::bindingxmlrpc::1297::XmlDetector::(handle_socket)
>>>> xml over http detected from ('127.0.0.1', 52513)
>>>>
>>>> BindingXMLRPC::INFO::2015-09-22
>>>> 19:12:59,903::xmlrpc::73::vds.XMLRPCServer::(handle_request)
>>>> Starting request handler for 127.0.0.1:52513
>>>>
>>>> Thread-812::INFO::2015-09-22
>>>> 19:12:59,903::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
>>>> Request handler for 127.0.0.1:52513 started
>>>>
>>>> Thread-812::INFO::2015-09-22
>>>> 19:12:59,904::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
>>>> Request handler for 127.0.0.1:52513 stopped
>>>>
>>>> Thread-813::DEBUG::2015-09-22
>>>> 19:13:05,898::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
>>>> Thread-814::DEBUG::2015-09-22
>>>> 19:13:07,934::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::moving
>>>> from state init -> state preparing
>>>>
>>>> Thread-814::INFO::2015-09-22
>>>> 19:13:07,935::logUtils::48::dispatcher::(wrapper)
>>>> Run and protect: repoStats(options=None)
>>>>
>>>> Thread-814::INFO::2015-09-22
>>>> 19:13:07,935::logUtils::51::dispatcher::(wrapper)
>>>> Run and protect: repoStats, Return response: {}
>>>>
>>>> Thread-814::DEBUG::2015-09-22
>>>> 19:13:07,935::task::1191::Storage.TaskManager.Task::(prepare)
>>>> Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::finished:
>>>> {}
>>>>
>>>> Thread-814::DEBUG::2015-09-22
>>>> 19:13:07,935::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::moving
>>>> from state preparing -> state finished
>>>>
>>>> Thread-814::DEBUG::2015-09-22
>>>> 19:13:07,935::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>>> Owner.releaseAll requests {} resources {}
>>>>
>>>> Thread-814::DEBUG::2015-09-22
>>>> 19:13:07,935::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>>> Owner.cancelAll requests {}
>>>>
>>>> Thread-814::DEBUG::2015-09-22
>>>> 19:13:07,935::task::993::Storage.TaskManager.Task::(_decref)
>>>> Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::ref 0
>>>> aborting False
>>>>
>>>> Thread-814::DEBUG::2015-09-22
>>>> 19:13:07,939::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
>>>> Reactor thread::INFO::2015-09-22
>>>> 19:13:14,921::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
>>>> Accepting connection from 127.0.0.1:52515
>>>>
>>>> Reactor thread::DEBUG::2015-09-22
>>>> 19:13:14,927::protocoldetector::82::ProtocolDetector.Detector::(__init__)
>>>> Using required_size=11
>>>>
>>>> Reactor thread::INFO::2015-09-22
>>>> 19:13:14,928::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
>>>> Detected protocol xml from 127.0.0.1:52515
>>>>
>>>> Reactor thread::DEBUG::2015-09-22
>>>> 19:13:14,928::bindingxmlrpc::1297::XmlDetector::(handle_socket)
>>>> xml over http detected from ('127.0.0.1', 52515)
>>>>
>>>> BindingXMLRPC::INFO::2015-09-22
>>>> 19:13:14,928::xmlrpc::73::vds.XMLRPCServer::(handle_request)
>>>> Starting request handler for 127.0.0.1:52515
>>>>
>>>> Thread-815::INFO::2015-09-22
>>>> 19:13:14,928::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
>>>> Request handler for 127.0.0.1:52515 started
>>>>
>>>> Thread-815::INFO::2015-09-22
>>>> 19:13:14,930::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
>>>> Request handler for 127.0.0.1:52515 stopped
>>>>
>>>> Thread-816::DEBUG::2015-09-22
>>>> 19:13:20,906::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
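>>>> In the vdsm log above the mount itself succeeds and only the metadata
>>>> write fails with EIO. One way to separate gluster from the engine is to
>>>> repeat vdsm's own mount by hand on the hypervisor and try a similar
>>>> write as the vdsm user (a sketch, using a throwaway mount point):
>>>>
>>>> # repeat the exact mount vdsm used (options taken from the vdsm log)
>>>> mkdir -p /mnt/vmstore-test
>>>> mount -t glusterfs -o backup-volfile-servers=sjcstorage02:sjcvhost02 \
>>>>       sjcstorage01:/vmstore /mnt/vmstore-test
>>>>
>>>> # direct-I/O write as the vdsm user, similar to what domain creation does
>>>> sudo -u vdsm dd if=/dev/zero of=/mnt/vmstore-test/__write_test__ bs=1M count=1 oflag=direct
>>>>
>>>> # clean up
>>>> sudo -u vdsm rm -f /mnt/vmstore-test/__write_test__
>>>> umount /mnt/vmstore-test
>>>>
>>>> If the dd fails the same way, the problem is on the volume/brick side;
>>>> for oVirt the volume normally needs storage.owner-uid/owner-gid 36 and
>>>> the options applied by 'gluster volume set vmstore group virt'.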
>>>>
>>>>
>>>> gluster logs
>>>>
>>>> +------------------------------------------------------------------------------+
>>>>
>>>> 1: volume vmstore-client-0
>>>>
>>>> 2: type protocol/client
>>>>
>>>> 3: option ping-timeout 42
>>>>
>>>> 4: option remote-host sjcstorage01
>>>>
>>>> 5: option remote-subvolume
>>>> /export/vmstore/brick01
>>>>
>>>> 6: option transport-type socket
>>>>
>>>> 7: option send-gids true
>>>>
>>>> 8: end-volume
>>>>
>>>> 9:
>>>>
>>>> 10: volume vmstore-client-1
>>>>
>>>> 11: type protocol/client
>>>>
>>>> 12: option ping-timeout 42
>>>>
>>>> 13: option remote-host sjcstorage02
>>>>
>>>> 14: option remote-subvolume
>>>> /export/vmstore/brick01
>>>>
>>>> 15: option transport-type socket
>>>>
>>>> 16: option send-gids true
>>>>
>>>> 17: end-volume
>>>>
>>>> 18:
>>>>
>>>> 19: volume vmstore-client-2
>>>>
>>>> 20: type protocol/client
>>>>
>>>> 21: option ping-timeout 42
>>>>
>>>> 22: option remote-host sjcvhost02
>>>>
>>>> 23: option remote-subvolume
>>>> /export/vmstore/brick01
>>>>
>>>> 24: option transport-type socket
>>>>
>>>> 25: option send-gids true
>>>>
>>>> 26: end-volume
>>>>
>>>> 27:
>>>>
>>>> 28: volume vmstore-replicate-0
>>>>
>>>> 29: type cluster/replicate
>>>>
>>>> 30: option arbiter-count 1
>>>>
>>>> 31: subvolumes vmstore-client-0 vmstore-client-1
>>>> vmstore-client-2
>>>>
>>>> 32: end-volume
>>>>
>>>> 33:
>>>>
>>>> 34: volume vmstore-dht
>>>>
>>>> 35: type cluster/distribute
>>>>
>>>> 36: subvolumes vmstore-replicate-0
>>>>
>>>> 37: end-volume
>>>>
>>>> 38:
>>>>
>>>> 39: volume vmstore-write-behind
>>>>
>>>> 40: type performance/write-behind
>>>>
>>>> 41: subvolumes vmstore-dht
>>>>
>>>> 42: end-volume
>>>>
>>>> 43:
>>>>
>>>> 44: volume vmstore-read-ahead
>>>>
>>>> 45: type performance/read-ahead
>>>>
>>>> 46: subvolumes vmstore-write-behind
>>>>
>>>> 47: end-volume
>>>>
>>>> 48:
>>>>
>>>> 49: volume vmstore-readdir-ahead
>>>>
>>>> 50: type performance/readdir-ahead
>>>>
>>>> 51: subvolumes vmstore-read-ahead
>>>>
>>>> 52: end-volume
>>>>
>>>> 53:
>>>>
>>>> 54: volume vmstore-io-cache
>>>>
>>>> 55: type performance/io-cache
>>>>
>>>> 56: subvolumes vmstore-readdir-ahead
>>>>
>>>> 57: end-volume
>>>>
>>>> 58:
>>>>
>>>> 59: volume vmstore-quick-read
>>>>
>>>> 60: type performance/quick-read
>>>>
>>>> 61: subvolumes vmstore-io-cache
>>>>
>>>> 62: end-volume
>>>>
>>>> 63:
>>>>
>>>> 64: volume vmstore-open-behind
>>>>
>>>> 65: type performance/open-behind
>>>>
>>>> 66: subvolumes vmstore-quick-read
>>>>
>>>> 67: end-volume
>>>>
>>>> 68:
>>>>
>>>> 69: volume vmstore-md-cache
>>>>
>>>> 70: type performance/md-cache
>>>>
>>>> 71: subvolumes vmstore-open-behind
>>>>
>>>> 72: end-volume
>>>>
>>>> 73:
>>>>
>>>> 74: volume vmstore
>>>>
>>>> 75: type debug/io-stats
>>>>
>>>> 76: option latency-measurement off
>>>>
>>>> 77: option count-fop-hits off
>>>>
>>>> 78: subvolumes vmstore-md-cache
>>>>
>>>> 79: end-volume
>>>>
>>>> 80:
>>>>
>>>> 81: volume meta-autoload
>>>>
>>>> 82: type meta
>>>>
>>>> 83: subvolumes vmstore
>>>>
>>>> 84: end-volume
>>>>
>>>> 85:
>>>>
>>>> +------------------------------------------------------------------------------+
>>>>
>>>> [2015-09-22 05:29:07.586205] I
>>>> [rpc-clnt.c:1851:rpc_clnt_reconfig]
>>>> 0-vmstore-client-0: changing port to 49153 (from 0)
>>>>
>>>> [2015-09-22 05:29:07.586325] I
>>>> [rpc-clnt.c:1851:rpc_clnt_reconfig]
>>>> 0-vmstore-client-1: changing port to 49153 (from 0)
>>>>
>>>> [2015-09-22 05:29:07.586480] I
>>>> [rpc-clnt.c:1851:rpc_clnt_reconfig]
>>>> 0-vmstore-client-2: changing port to 49153 (from 0)
>>>>
>>>> [2015-09-22 05:29:07.595052] I [MSGID: 114057]
>>>> [client-handshake.c:1437:select_server_supported_programs]
>>>> 0-vmstore-client-0: Using Program GlusterFS 3.3,
>>>> Num (1298437), Version (330)
>>>>
>>>> [2015-09-22 05:29:07.595397] I [MSGID: 114057]
>>>> [client-handshake.c:1437:select_server_supported_programs]
>>>> 0-vmstore-client-1: Using Program GlusterFS 3.3,
>>>> Num (1298437), Version (330)
>>>>
>>>> [2015-09-22 05:29:07.595576] I [MSGID: 114057]
>>>> [client-handshake.c:1437:select_server_supported_programs]
>>>> 0-vmstore-client-2: Using Program GlusterFS 3.3,
>>>> Num (1298437), Version (330)
>>>>
>>>> [2015-09-22 05:29:07.595721] I [MSGID: 114046]
>>>> [client-handshake.c:1213:client_setvolume_cbk]
>>>> 0-vmstore-client-0: Connected to vmstore-client-0,
>>>> attached to remote volume '/export/vmstore/brick01'.
>>>>
>>>> [2015-09-22 05:29:07.595738] I [MSGID: 114047]
>>>> [client-handshake.c:1224:client_setvolume_cbk]
>>>> 0-vmstore-client-0: Server and Client lk-version
>>>> numbers are not same, reopening the fds
>>>>
>>>> [2015-09-22 05:29:07.596044] I [MSGID: 108005]
>>>> [afr-common.c:3998:afr_notify]
>>>> 0-vmstore-replicate-0: Subvolume 'vmstore-client-0'
>>>> came back up; going online.
>>>>
>>>> [2015-09-22 05:29:07.596170] I [MSGID: 114046]
>>>> [client-handshake.c:1213:client_setvolume_cbk]
>>>> 0-vmstore-client-1: Connected to vmstore-client-1,
>>>> attached to remote volume '/export/vmstore/brick01'.
>>>>
>>>> [2015-09-22 05:29:07.596189] I [MSGID: 114047]
>>>> [client-handshake.c:1224:client_setvolume_cbk]
>>>> 0-vmstore-client-1: Server and Client lk-version
>>>> numbers are not same, reopening the fds
>>>>
>>>> [2015-09-22 05:29:07.596495] I [MSGID: 114046]
>>>> [client-handshake.c:1213:client_setvolume_cbk]
>>>> 0-vmstore-client-2: Connected to vmstore-client-2,
>>>> attached to remote volume '/export/vmstore/brick01'.
>>>>
>>>> [2015-09-22 05:29:07.596506] I [MSGID: 114047]
>>>> [client-handshake.c:1224:client_setvolume_cbk]
>>>> 0-vmstore-client-2: Server and Client lk-version
>>>> numbers are not same, reopening the fds
>>>>
>>>> [2015-09-22 05:29:07.608758] I
>>>> [fuse-bridge.c:5053:fuse_graph_setup] 0-fuse:
>>>> switched to graph 0
>>>>
>>>> [2015-09-22 05:29:07.608910] I [MSGID: 114035]
>>>> [client-handshake.c:193:client_set_lk_version_cbk]
>>>> 0-vmstore-client-0: Server lk version = 1
>>>>
>>>> [2015-09-22 05:29:07.608936] I [MSGID: 114035]
>>>> [client-handshake.c:193:client_set_lk_version_cbk]
>>>> 0-vmstore-client-1: Server lk version = 1
>>>>
>>>> [2015-09-22 05:29:07.608950] I [MSGID: 114035]
>>>> [client-handshake.c:193:client_set_lk_version_cbk]
>>>> 0-vmstore-client-2: Server lk version = 1
>>>>
>>>> [2015-09-22 05:29:07.609695] I [MSGID: 101190]
>>>> [event-epoll.c:632:event_dispatch_epoll_worker]
>>>> 0-epoll: Started thread with index 2
>>>>
>>>> [2015-09-22 05:29:07.609868] I
>>>> [fuse-bridge.c:3979:fuse_init] 0-glusterfs-fuse:
>>>> FUSE inited with protocol versions: glusterfs 7.22
>>>> kernel 7.22
>>>>
>>>> [2015-09-22 05:29:07.616577] I [MSGID: 109063]
>>>> [dht-layout.c:702:dht_layout_normalize]
>>>> 0-vmstore-dht: Found anomalies in / (gfid =
>>>> 00000000-0000-0000-0000-000000000001). Holes=1
>>>> overlaps=0
>>>>
>>>> [2015-09-22 05:29:07.620230] I [MSGID: 109036]
>>>> [dht-common.c:7754:dht_log_new_layout_for_dir_selfheal]
>>>> 0-vmstore-dht: Setting layout of / with
>>>> [Subvol_name: vmstore-replicate-0, Err: -1 , Start:
>>>> 0 , Stop: 4294967295 , Hash: 1 ],
>>>>
>>>> [2015-09-22 05:29:08.122415] W
>>>> [fuse-bridge.c:1230:fuse_err_cbk] 0-glusterfs-fuse:
>>>> 26: REMOVEXATTR() /__DIRECT_IO_TEST__ => -1 (No
>>>> data available)
>>>>
>>>> [2015-09-22 05:29:08.137359] I [MSGID:
>>>> 109036]
>>>> [dht-common.c:7754:dht_log_new_layout_for_dir_selfheal]
>>>> 0-vmstore-dht: Setting layout of
>>>> /061b73d5-ae59-462e-b674-ea9c60d436c2 with
>>>> [Subvol_name: vmstore-replicate-0, Err: -1 , Start:
>>>> 0 , Stop: 4294967295 , Hash: 1 ],
>>>>
>>>> [2015-09-22 05:29:08.145835] I [MSGID: 109036]
>>>> [dht-common.c:7754:dht_log_new_layout_for_dir_selfheal]
>>>> 0-vmstore-dht: Setting layout of
>>>> /061b73d5-ae59-462e-b674-ea9c60d436c2/dom_md with
>>>> [Subvol_name: vmstore-replicate-0, Err: -1 , Start:
>>>> 0 , Stop: 4294967295 , Hash: 1 ],
>>>>
>>>> [2015-09-22 05:30:57.897819] I [MSGID: 100030]
>>>> [glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs:
>>>> Started running /usr/sbin/glusterfs version 3.7.4
>>>> (args: /usr/sbin/glusterfs
>>>> --volfile-server=sjcvhost02
>>>> --volfile-server=sjcstorage01
>>>> --volfile-server=sjcstorage02 --volfile-id=/vmstore
>>>> /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
>>>>
>>>> [2015-09-22 05:30:57.909889] I [MSGID: 101190]
>>>> [event-epoll.c:632:event_dispatch_epoll_worker]
>>>> 0-epoll: Started thread with index 1
>>>>
>>>> [2015-09-22 05:30:57.923087] I [MSGID: 114020]
>>>> [client.c:2118:notify] 0-vmstore-client-0: parent
>>>> translators are ready, attempting connect on transport
>>>>
>>>> [2015-09-22 05:30:57.925701] I [MSGID: 114020]
>>>> [client.c:2118:notify] 0-vmstore-client-1: parent
>>>> translators are ready, attempting connect on transport
>>>>
>>>> [2015-09-22 05:30:57.927984] I [MSGID: 114020]
>>>> [client.c:2118:notify] 0-vmstore-client-2: parent
>>>> translators are ready, attempting connect on transport
>>>>
>>>> Final graph:
>>>>
>>>> +------------------------------------------------------------------------------+
>>>>
>>>> 1: volume vmstore-client-0
>>>>
>>>> 2: type protocol/client
>>>>
>>>> 3: option ping-timeout 42
>>>>
>>>> 4: option remote-host sjcstorage01
>>>>
>>>> 5: option remote-subvolume
>>>> /export/vmstore/brick01
>>>>
>>>> 6: option transport-type socket
>>>>
>>>> 7: option send-gids true
>>>>
>>>> 8: end-volume
>>>>
>>>> 9:
>>>>
>>>> 10: volume vmstore-client-1
>>>>
>>>> 11: type protocol/client
>>>>
>>>> 12: option ping-timeout 42
>>>>
>>>> 13: option remote-host sjcstorage02
>>>>
>>>> 14: option remote-subvolume
>>>> /export/vmstore/brick01
>>>>
>>>> 15: option transport-type socket
>>>>
>>>> 16: option send-gids true
>>>>
>>>> 17: end-volume
>>>>
>>>> 18:
>>>>
>>>> 19: volume vmstore-client-2
>>>>
>>>> 20: type protocol/client
>>>>
>>>> 21: option ping-timeout 42
>>>>
>>>> 22: option remote-host sjcvhost02
>>>>
>>>> 23: option remote-subvolume
>>>> /export/vmstore/brick01
>>>>
>>>> 24: option transport-type socket
>>>>
>>>> 25: option send-gids true
>>>>
>>>> 26: end-volume
>>>>
>>>> 27:
>>>>
>>>> 28: volume vmstore-replicate-0
>>>>
>>>> 29: type cluster/replicate
>>>>
>>>> 30: option arbiter-count 1
>>>>
>>>> 31: subvolumes vmstore-client-0 vmstore-client-1
>>>> vmstore-client-2
>>>>
>>>> 32: end-volume
>>>>
>>>> 33:
>>>>
>>>> 34: volume vmstore-dht
>>>>
>>>> 35: type cluster/distribute
>>>>
>>>> 36: subvolumes vmstore-replicate-0
>>>>
>>>> 37: end-volume
>>>>
>>>> 38:
>>>>
>>>> 39: volume vmstore-write-behind
>>>>
>>>> 40: type performance/write-behind
>>>>
>>>> 41: subvolumes vmstore-dht
>>>>
>>>> 42: end-volume
>>>>
>>>> 43:
>>>>
>>>> 44: volume vmstore-read-ahead
>>>>
>>>> 45: type performance/read-ahead
>>>>
>>>> 46: subvolumes vmstore-write-behind
>>>>
>>>> 47: end-volume
>>>>
>>>> 48:
>>>>
>>>> 49: volume vmstore-readdir-ahead
>>>>
>>>> 50: type performance/readdir-ahead
>>>>
>>>> 51: subvolumes vmstore-read-ahead
>>>>
>>>> 52: end-volume
>>>>
>>>> 53:
>>>>
>>>> 54: volume vmstore-io-cache
>>>>
>>>> 55: type performance/io-cache
>>>>
>>>> 56: subvolumes vmstore-readdir-ahead
>>>>
>>>> 57: end-volume
>>>>
>>>> 58:
>>>>
>>>> 59: volume vmstore-quick-read
>>>>
>>>> 60: type performance/quick-read
>>>>
>>>> 61: subvolumes vmstore-io-cache
>>>>
>>>> 62: end-volume
>>>>
>>>> 63:
>>>>
>>>> 64: volume vmstore-open-behind
>>>>
>>>> 65: type performance/open-behind
>>>>
>>>> 66: subvolumes vmstore-quick-read
>>>>
>>>> 67: end-volume
>>>>
>>>> 68:
>>>>
>>>> 69: volume vmstore-md-cache
>>>>
>>>> 70: type performance/md-cache
>>>>
>>>> 71: subvolumes vmstore-open-behind
>>>>
>>>> 72: end-volume
>>>>
>>>> 73:
>>>>
>>>> 74: volume vmstore
>>>>
>>>> 75: type debug/io-stats
>>>>
>>>> 76: option latency-measurement off
>>>>
>>>> 77: option count-fop-hits off
>>>>
>>>> 78: subvolumes vmstore-md-cache
>>>>
>>>> 79: end-volume
>>>>
>>>> 80:
>>>>
>>>> 81: volume meta-autoload
>>>>
>>>> 82: type meta
>>>>
>>>> 83: subvolumes vmstore
>>>>
>>>> 84: end-volume
>>>>
>>>> 85:
>>>>
>>>> +------------------------------------------------------------------------------+
>>>>
>>>> [2015-09-22 05:30:57.934021] I
>>>> [rpc-clnt.c:1851:rpc_clnt_reconfig]
>>>> 0-vmstore-client-0: changing port to 49153 (from 0)
>>>>
>>>> [2015-09-22 05:30:57.934145] I
>>>> [rpc-clnt.c:1851:rpc_clnt_reconfig]
>>>> 0-vmstore-client-1: changing port to 49153 (from 0)
>>>>
>>>> [2015-09-22 05:30:57.934491] I
>>>> [rpc-clnt.c:1851:rpc_clnt_reconfig]
>>>> 0-vmstore-client-2: changing port to 49153 (from 0)
>>>>
>>>> [2015-09-22 05:30:57.942198] I [MSGID: 114057]
>>>> [client-handshake.c:1437:select_server_supported_programs]
>>>> 0-vmstore-client-0: Using Program GlusterFS 3.3,
>>>> Num (1298437), Version (330)
>>>>
>>>> [2015-09-22 05:30:57.942545] I [MSGID: 114057]
>>>> [client-handshake.c:1437:select_server_supported_programs]
>>>> 0-vmstore-client-1: Using Program GlusterFS 3.3,
>>>> Num (1298437), Version (330)
>>>>
>>>> [2015-09-22 05:30:57.942659] I [MSGID: 114057]
>>>> [client-handshake.c:1437:select_server_supported_programs]
>>>> 0-vmstore-client-2: Using Program GlusterFS 3.3,
>>>> Num (1298437), Version (330)
>>>>
>>>> [2015-09-22 05:30:57.942797] I [MSGID: 114046]
>>>> [client-handshake.c:1213:client_setvolume_cbk]
>>>> 0-vmstore-client-0: Connected to vmstore-client-0,
>>>> attached to remote volume '/export/vmstore/brick01'.
>>>>
>>>> [2015-09-22 05:30:57.942808] I [MSGID: 114047]
>>>> [client-handshake.c:1224:client_setvolume_cbk]
>>>> 0-vmstore-client-0: Server and Client lk-version
>>>> numbers are not same, reopening the fds
>>>>
>>>> [2015-09-22 05:30:57.943036] I [MSGID: 108005]
>>>> [afr-common.c:3998:afr_notify]
>>>> 0-vmstore-replicate-0: Subvolume 'vmstore-client-0'
>>>> came back up; going online.
>>>>
>>>> [2015-09-22 05:30:57.943078] I [MSGID: 114046]
>>>> [client-handshake.c:1213:client_setvolume_cbk]
>>>> 0-vmstore-client-1: Connected to vmstore-client-1,
>>>> attached to remote volume '/export/vmstore/brick01'.
>>>>
>>>> [2015-09-22 05:30:57.943086] I [MSGID: 114047]
>>>> [client-handshake.c:1224:client_setvolume_cbk]
>>>> 0-vmstore-client-1: Server and Client lk-version
>>>> numbers are not same, reopening the fds
>>>>
>>>> [2015-09-22 05:30:57.943292] I [MSGID: 114046]
>>>> [client-handshake.c:1213:client_setvolume_cbk]
>>>> 0-vmstore-client-2: Connected to vmstore-client-2,
>>>> attached to remote volume '/export/vmstore/brick01'.
>>>>
>>>> [2015-09-22 05:30:57.943302] I [MSGID: 114047]
>>>> [client-handshake.c:1224:client_setvolume_cbk]
>>>> 0-vmstore-client-2: Server and Client lk-version
>>>> numbers are not same, reopening the fds
>>>>
>>>> [2015-09-22 05:30:57.953887] I
>>>> [fuse-bridge.c:5053:fuse_graph_setup] 0-fuse:
>>>> switched to graph 0
>>>>
>>>> [2015-09-22 05:30:57.954071] I [MSGID: 114035]
>>>> [client-handshake.c:193:client_set_lk_version_cbk]
>>>> 0-vmstore-client-0: Server lk version = 1
>>>>
>>>> [2015-09-22 05:30:57.954105] I [MSGID: 114035]
>>>> [client-handshake.c:193:client_set_lk_version_cbk]
>>>> 0-vmstore-client-1: Server lk version = 1
>>>>
>>>> [2015-09-22 05:30:57.954124] I [MSGID: 114035]
>>>> [client-handshake.c:193:client_set_lk_version_cbk]
>>>> 0-vmstore-client-2: Server lk version = 1
>>>>
>>>> [2015-09-22 05:30:57.955282] I
>>>> [fuse-bridge.c:3979:fuse_init] 0-glusterfs-fuse:
>>>> FUSE inited with protocol versions: glusterfs 7.22
>>>> kernel 7.22
>>>>
>>>> [2015-09-22 05:30:57.955738] I [MSGID: 101190]
>>>> [event-epoll.c:632:event_dispatch_epoll_worker]
>>>> 0-epoll: Started thread with index 2
>>>>
>>>> [2015-09-22 05:30:57.970232] I
>>>> [fuse-bridge.c:4900:fuse_thread_proc] 0-fuse:
>>>> unmounting
>>>> /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
>>>>
>>>> [2015-09-22 05:30:57.970834] W
>>>> [glusterfsd.c:1219:cleanup_and_exit]
>>>> (-->/lib64/libpthread.so.0(+0x7df5)
>>>> [0x7f187139fdf5]
>>>> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
>>>> [0x7f1872a09785]
>>>> -->/usr/sbin/glusterfs(cleanup_and_exit+0x69)
>>>> [0x7f1872a09609] ) 0-: received signum (15),
>>>> shutting down
>>>>
>>>> [2015-09-22 05:30:57.970848] I
>>>> [fuse-bridge.c:5595:fini] 0-fuse: Unmounting
>>>> '/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.
>>>>
>>>> [2015-09-22 05:30:58.420973] I
>>>> [fuse-bridge.c:4900:fuse_thread_proc] 0-fuse:
>>>> unmounting
>>>> /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
>>>>
>>>> [2015-09-22 05:30:58.421355] W
>>>> [glusterfsd.c:1219:cleanup_and_exit]
>>>> (-->/lib64/libpthread.so.0(+0x7df5)
>>>> [0x7f8267cd4df5]
>>>> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
>>>> [0x7f826933e785]
>>>> -->/usr/sbin/glusterfs(cleanup_and_exit+0x69)
>>>> [0x7f826933e609] ) 0-: received signum (15),
>>>> shutting down
>>>>
>>>> [2015-09-22 05:30:58.421369] I
>>>> [fuse-bridge.c:5595:fini] 0-fuse: Unmounting
>>>> '/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.
>>>>
>>>> [2015-09-22 05:31:09.534410] I [MSGID: 100030]
>>>> [glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs:
>>>> Started running /usr/sbin/glusterfs version 3.7.4
>>>> (args: /usr/sbin/glusterfs
>>>> --volfile-server=sjcvhost02
>>>> --volfile-server=sjcstorage01
>>>> --volfile-server=sjcstorage02 --volfile-id=/vmstore
>>>> /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
>>>>
>>>> [2015-09-22 05:31:09.545686] I [MSGID: 101190]
>>>> [event-epoll.c:632:event_dispatch_epoll_worker]
>>>> 0-epoll: Started thread with index 1
>>>>
>>>> [2015-09-22 05:31:09.553019] I [MSGID: 114020]
>>>> [client.c:2118:notify] 0-vmstore-client-0: parent
>>>> translators are ready, attempting connect on transport
>>>>
>>>> [2015-09-22 05:31:09.555552] I [MSGID: 114020]
>>>> [client.c:2118:notify] 0-vmstore-client-1: parent
>>>> translators are ready, attempting connect on transport
>>>>
>>>> [2015-09-22 05:31:09.557989] I [MSGID: 114020]
>>>> [client.c:2118:notify] 0-vmstore-client-2: parent
>>>> translators are ready, attempting connect on transport
>>>>
>>>> Final graph:
>>>>
>>>> +------------------------------------------------------------------------------+
>>>>
>>>> 1: volume vmstore-client-0
>>>>
>>>> 2: type protocol/client
>>>>
>>>> 3: option ping-timeout 42
>>>>
>>>> 4: option remote-host sjcstorage01
>>>>
>>>> 5: option remote-subvolume
>>>> /export/vmstore/brick01
>>>>
>>>> 6: option transport-type socket
>>>>
>>>> 7: option send-gids true
>>>>
>>>> 8: end-volume
>>>>
>>>> 9:
>>>>
>>>> 10: volume vmstore-client-1
>>>>
>>>> 11: type protocol/client
>>>>
>>>> 12: option ping-timeout 42
>>>>
>>>> 13: option remote-host sjcstorage02
>>>>
>>>> 14: option remote-subvolume
>>>> /export/vmstore/brick01
>>>>
>>>> 15: option transport-type socket
>>>>
>>>> 16: option send-gids true
>>>>
>>>> 17: end-volume
>>>>
>>>> 18:
>>>>
>>>> 19: volume vmstore-client-2
>>>>
>>>> 20: type protocol/client
>>>>
>>>> 21: option ping-timeout 42
>>>>
>>>> 22: option remote-host sjcvhost02
>>>>
>>>> 23: option remote-subvolume
>>>> /export/vmstore/brick01
>>>>
>>>> 24: option transport-type socket
>>>>
>>>> 25: option send-gids true
>>>>
>>>> 26: end-volume
>>>>
>>>> 27:
>>>>
>>>> 28: volume vmstore-replicate-0
>>>>
>>>> 29: type cluster/replicate
>>>>
>>>> 30: option arbiter-count 1
>>>>
>>>> 31: subvolumes vmstore-client-0 vmstore-client-1
>>>> vmstore-client-2
>>>>
>>>> 32: end-volume
>>>>
>>>> 33:
>>>>
>>>> 34: volume vmstore-dht
>>>>
>>>> 35: type cluster/distribute
>>>>
>>>> 36: subvolumes vmstore-replicate-0
>>>>
>>>> 37: end-volume
>>>>
>>>> 38:
>>>>
>>>> 39: volume vmstore-write-behind
>>>>
>>>> 40: type performance/write-behind
>>>>
>>>> 41: subvolumes vmstore-dht
>>>>
>>>> 42: end-volume
>>>>
>>>> 43:
>>>>
>>>> 44: volume vmstore-read-ahead
>>>>
>>>> 45: type performance/read-ahead
>>>>
>>>> 46: subvolumes vmstore-write-behind
>>>>
>>>> 47: end-volume
>>>>
>>>> 48:
>>>>
>>>> 49: volume vmstore-readdir-ahead
>>>>
>>>> 50: type performance/readdir-ahead
>>>>
>>>> 51: subvolumes vmstore-read-ahead
>>>>
>>>> 52: end-volume
>>>>
>>>> 53:
>>>>
>>>> 54: volume vmstore-io-cache
>>>>
>>>> 55: type performance/io-cache
>>>>
>>>> 56: subvolumes vmstore-readdir-ahead
>>>>
>>>> 57: end-volume
>>>>
>>>> 58:
>>>>
>>>> 59: volume vmstore-quick-read
>>>>
>>>> 60: type performance/quick-read
>>>>
>>>> 61: subvolumes vmstore-io-cache
>>>>
>>>> 62: end-volume
>>>>
>>>> 63:
>>>>
>>>> 64: volume vmstore-open-behind
>>>>
>>>> 65: type performance/open-behind
>>>>
>>>> 66: subvolumes vmstore-quick-read
>>>>
>>>> 67: end-volume
>>>>
>>>> 68:
>>>>
>>>> 69: volume vmstore-md-cache
>>>>
>>>> 70: type performance/md-cache
>>>>
>>>> 71: subvolumes vmstore-open-behind
>>>>
>>>> 72: end-volume
>>>>
>>>> 73:
>>>>
>>>> 74: volume vmstore
>>>>
>>>> 75: type debug/io-stats
>>>>
>>>> 76: option latency-measurement off
>>>>
>>>> 77: option count-fop-hits off
>>>>
>>>> 78: subvolumes vmstore-md-cache
>>>>
>>>> 79: end-volume
>>>>
>>>> 80:
>>>>
>>>> 81: volume meta-autoload
>>>>
>>>> 82: type meta
>>>>
>>>> 83: subvolumes vmstore
>>>>
>>>> 84: end-volume
>>>>
>>>> 85:
>>>>
>>>> +------------------------------------------------------------------------------+
>>>>
>>>> [2015-09-22 05:31:09.563262] I
>>>> [rpc-clnt.c:1851:rpc_clnt_reconfig]
>>>> 0-vmstore-client-0: changing port to 49153 (from 0)
>>>>
>>>> [2015-09-22 05:31:09.563431] I
>>>> [rpc-clnt.c:1851:rpc_clnt_reconfig]
>>>> 0-vmstore-client-1: changing port to 49153 (from 0)
>>>>
>>>> [2015-09-22 05:31:09.563877] I
>>>> [rpc-clnt.c:1851:rpc_clnt_reconfig]
>>>> 0-vmstore-client-2: changing port to 49153 (from 0)
>>>>
>>>> [2015-09-22 05:31:09.572443] I [MSGID: 114057]
>>>> [client-handshake.c:1437:select_server_supported_programs]
>>>> 0-vmstore-client-1: Using Program GlusterFS 3.3,
>>>> Num (1298437), Version (330)
>>>>
>>>> [2015-09-22 05:31:09.572599] I [MSGID: 114057]
>>>> [client-handshake.c:1437:select_server_supported_programs]
>>>> 0-vmstore-client-0: Using Program GlusterFS 3.3,
>>>> Num (1298437), Version (330)
>>>>
>>>> [2015-09-22 05:31:09.572742] I [MSGID: 114057]
>>>> [client-handshake.c:1437:select_server_supported_programs]
>>>> 0-vmstore-client-2: Using Program GlusterFS 3.3,
>>>> Num (1298437), Version (330)
>>>>
>>>> [2015-09-22 05:31:09.573165] I [MSGID: 114046]
>>>> [client-handshake.c:1213:client_setvolume_cbk]
>>>> 0-vmstore-client-1: Connected to vmstore-client-1,
>>>> attached to remote volume '/export/vmstore/brick01'.
>>>>
>>>> [2015-09-22 05:31:09.573186] I [MSGID: 114047]
>>>> [client-handshake.c:1224:client_setvolume_cbk]
>>>> 0-vmstore-client-1: Server and Client lk-version
>>>> numbers are not same, reopening the fds
>>>>
>>>> [2015-09-22 05:31:09.573395] I [MSGID: 108005]
>>>> [afr-common.c:3998:afr_notify]
>>>> 0-vmstore-replicate-0: Subvolume 'vmstore-client-1'
>>>> came back up; going online.
>>>>
>>>> [2015-09-22 05:31:09.573427] I [MSGID: 114046]
>>>> [client-handshake.c:1213:client_setvolume_cbk]
>>>> 0-vmstore-client-0: Connected to vmstore-client-0,
>>>> attached to remote volume '/export/vmstore/brick01'.
>>>>
>>>> [2015-09-22 05:31:09.573435] I [MSGID: 114047]
>>>> [client-handshake.c:1224:client_setvolume_cbk]
>>>> 0-vmstore-client-0: Server and Client lk-version
>>>> numbers are not same, reopening the fds
>>>>
>>>> [2015-09-22 05:31:09.573754] I [MSGID: 114046]
>>>> [client-handshake.c:1213:client_setvolume_cbk]
>>>> 0-vmstore-client-2: Connected to vmstore-client-2,
>>>> attached to remote volume '/export/vmstore/brick01'.
>>>>
>>>> [2015-09-22 05:31:09.573783] I [MSGID: 114047]
>>>> [client-handshake.c:1224:client_setvolume_cbk]
>>>> 0-vmstore-client-2: Server and Client lk-version
>>>> numbers are not same, reopening the fds
>>>>
>>>> [2015-09-22 05:31:09.577192] I
>>>> [fuse-bridge.c:5053:fuse_graph_setup] 0-fuse:
>>>> switched to graph 0
>>>>
>>>> [2015-09-22 05:31:09.577302] I [MSGID: 114035]
>>>> [client-handshake.c:193:client_set_lk_version_cbk]
>>>> 0-vmstore-client-1: Server lk version = 1
>>>>
>>>> [2015-09-22 05:31:09.577325] I [MSGID: 114035]
>>>> [client-handshake.c:193:client_set_lk_version_cbk]
>>>> 0-vmstore-client-0: Server lk version = 1
>>>>
>>>> [2015-09-22 05:31:09.577339] I [MSGID: 114035]
>>>> [client-handshake.c:193:client_set_lk_version_cbk]
>>>> 0-vmstore-client-2: Server lk version = 1
>>>>
>>>> [2015-09-22 05:31:09.578125] I
>>>> [fuse-bridge.c:3979:fuse_init] 0-glusterfs-fuse:
>>>> FUSE inited with protocol versions: glusterfs 7.22
>>>> kernel 7.22
>>>>
>>>> [2015-09-22 05:31:09.578636] I [MSGID: 101190]
>>>> [event-epoll.c:632:event_dispatch_epoll_worker]
>>>> 0-epoll: Started thread with index 2
>>>>
>>>> [2015-09-22 05:31:10.073698] I
>>>> [fuse-bridge.c:4900:fuse_thread_proc] 0-fuse:
>>>> unmounting
>>>> /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
>>>>
>>>> [2015-09-22 05:31:10.073977] W
>>>> [glusterfsd.c:1219:cleanup_and_exit]
>>>> (-->/lib64/libpthread.so.0(+0x7df5)
>>>> [0x7f6b9ba88df5]
>>>> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
>>>> [0x7f6b9d0f2785]
>>>> -->/usr/sbin/glusterfs(cleanup_and_exit+0x69)
>>>> [0x7f6b9d0f2609] ) 0-: received signum (15),
>>>> shutting down
>>>>
>>>> [2015-09-22 05:31:10.073993] I
>>>> [fuse-bridge.c:5595:fini] 0-fuse: Unmounting
>>>> '/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.
>>>>
>>>> [2015-09-22 05:31:20.184700] I [MSGID: 100030]
>>>> [glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs:
>>>> Started running /usr/sbin/glusterfs version 3.7.4
>>>> (args: /usr/sbin/glusterfs
>>>> --volfile-server=sjcvhost02
>>>> --volfile-server=sjcstorage01
>>>> --volfile-server=sjcstorage02 --volfile-id=/vmstore
>>>> /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
>>>>
>>>> [2015-09-22 05:31:20.194928] I [MSGID: 101190]
>>>> [event-epoll.c:632:event_dispatch_epoll_worker]
>>>> 0-epoll: Started thread with index 1
>>>>
>>>> [2015-09-22 05:31:20.200701] I [MSGID: 114020]
>>>> [client.c:2118:notify] 0-vmstore-client-0: parent
>>>> translators are ready, attempting connect on transport
>>>>
>>>> [2015-09-22 05:31:20.203110] I [MSGID: 114020]
>>>> [client.c:2118:notify] 0-vmstore-client-1: parent
>>>> translators are ready, attempting connect on transport
>>>>
>>>> [2015-09-22 05:31:20.205708] I [MSGID: 114020]
>>>> [client.c:2118:notify] 0-vmstore-client-2: parent
>>>> translators are ready, attempting connect on transport
>>>>
>>>> Final graph:
>>>>
>>>>
>>>>
>>>> Hope this helps.
>>>>
>>>>
>>>> thanks again
>>>>
>>>>
>>>> Brett Stevens
>>>>
>>>>
>>>>
>>>> On Tue, Sep 22, 2015 at 10:14 PM, Sahina Bose
>>>> <sabose(a)redhat.com> wrote:
>>>>
>>>>
>>>>
>>>> On 09/22/2015 02:17 PM, Brett Stevens wrote:
>>>>> Hi. First time on the lists. I've searched for
>>>>> this but no luck so sorry if this has been
>>>>> covered before.
>>>>>
>>>>> I'm working with the latest 3.6 beta with the
>>>>> following infrastructure.
>>>>>
>>>>> 1 management host (to be used for a number of
>>>>> tasks so chose not to use self hosted, we are
>>>>> a school and will need to keep an eye on
>>>>> hardware costs)
>>>>> 2 compute nodes
>>>>> 2 gluster nodes
>>>>>
>>>>> so far built one gluster volume using the
>>>>> gluster cli to give me 2 nodes and one arbiter
>>>>> node (management host)
>>>>>
>>>>> So far, every time I create a volume, it shows
>>>>> up straight away in the oVirt GUI. However, no
>>>>> matter what I try, I cannot create or import
>>>>> it as a data domain.
>>>>>
>>>>> the current error in the ovirt gui is "Error
>>>>> while executing action
>>>>> AddGlusterFsStorageDomain: Error creating a
>>>>> storage domain's metadata"
>>>>
>>>> Please provide vdsm and gluster logs
>>>>
>>>>>
>>>>> The logs are continuously rolling the following
>>>>> errors:
>>>>>
>>>>> Scheduler_Worker-53) [] START,
>>>>> GlusterVolumesListVDSCommand(HostName =
>>>>> sjcstorage02,
>>>>> GlusterVolumesListVDSParameters:{runAsync='true',
>>>>> hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}),
>>>>> log id: 24198fbf
>>>>>
>>>>> 2015-09-22 03:57:29,903 WARN
>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>>>>> (DefaultQuartzScheduler_Worker-53) [] Could
>>>>> not associate brick
>>>>> 'sjcstorage01:/export/vmstore/brick01' of
>>>>> volume '878a316d-2394-4aae-bdf8-e10eea38225e'
>>>>> with correct network as no gluster network
>>>>> found in cluster
>>>>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>>>>
>>>>
>>>> What is the hostname provided in oVirt engine
>>>> for sjcstorage01? Does this host have multiple
>>>> NICs?
>>>>
>>>> Could you provide the output of gluster volume info?
>>>> Please note that these errors are not related
>>>> to the error in creating the storage domain. However,
>>>> these errors could prevent you from monitoring
>>>> the state of the gluster volume from oVirt.
>>>>
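(For readers following the thread: the details asked for above come straight from the standard gluster CLI on any of the storage nodes. A minimal check, using the volume name from this thread, would be:

  gluster volume info vmstore          # bricks, replica/arbiter layout, volume options
  gluster peer status                  # peer hostnames and UUIDs as glusterd sees them
  cat /var/lib/glusterd/glusterd.info  # UUID of the local node, useful when comparing
                                       # against "server uuid ... not found in cluster" warnings

These are stock gluster commands; only the volume and host names are taken from the thread.)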
>>>>> 2015-09-22 03:57:29,905 WARN
>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>>>>> (DefaultQuartzScheduler_Worker-53) [] Could
>>>>> not associate brick
>>>>> 'sjcstorage02:/export/vmstore/brick01' of
>>>>> volume '878a316d-2394-4aae-bdf8-e10eea38225e'
>>>>> with correct network as no gluster network
>>>>> found in cluster
>>>>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>>>>
>>>>> 2015-09-22 03:57:29,905 WARN
>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>>>>> (DefaultQuartzScheduler_Worker-53) [] Could
>>>>> not add brick
>>>>> 'sjcvhost02:/export/vmstore/brick01' to volume
>>>>> '878a316d-2394-4aae-bdf8-e10eea38225e' -
>>>>> server uuid
>>>>> '29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not
>>>>> found in cluster
>>>>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>>>>
>>>>> 2015-09-22 03:57:29,905 INFO
>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
>>>>> (DefaultQuartzScheduler_Worker-53) [] FINISH,
>>>>> GlusterVolumesListVDSCommand, return:
>>>>> {878a316d-2394-4aae-bdf8-e10eea38225e=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@41e93fb1},
>>>>> log id: 24198fbf
>>>>>
>>>>>
>>>>> I'm new to ovirt and gluster, so any help
>>>>> would be great
>>>>>
>>>>>
>>>>> thanks
>>>>>
>>>>>
>>>>> Brett Stevens
>>>>>
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> Users mailing list
>>>>> Users(a)ovirt.org <mailto:Users@ovirt.org>
>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
>>>>
>>>
>>>
>>
>>
>
>
>
>
>

Hi Brett,
Can you truncate the gluster brick and mount logs on all three nodes, try creating the storage domain again and then share these logs along with the VDSM logs?

i.e. on all 3 nodes,
1. echo > /var/log/glusterfs/rhev-data-center-mnt-glusterSD-sjcstorage01:_vmstore.log
2. echo > export-vmstore-brick01.log
3. Create the storage domain (at which point VDSM supposedly fails with the truncate error)
4. Share the logs.

Also, what timezone are you in? That would be needed to correlate the timestamps in the vdsm log (local time) and the gluster log (UTC).

Thanks!
Ravi
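Spelled out as shell commands, the steps above would look roughly like this. The exact log file names depend on the mount and brick paths, so treat them as placeholders (brick logs normally live under /var/log/glusterfs/bricks/, and the archive name here is only illustrative):

  # on all 3 nodes
  > /var/log/glusterfs/rhev-data-center-mnt-glusterSD-sjcstorage01:_vmstore.log
  > /var/log/glusterfs/bricks/export-vmstore-brick01.log
  # retry the storage domain creation from the engine, then collect the results
  tar czf /tmp/gluster-vdsm-logs-$(hostname).tar.gz /var/log/glusterfs /var/log/vdsm/vdsm.log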

-------- Forwarded Message --------
Subject: Re: [ovirt-users] adding gluster domains
Date: Tue, 29 Sep 2015 08:38:49 +1000
From: Brett Stevens <gorttman(a)i3sec.com.au>
Reply-To: brett(a)i3sec.com.au
To: Sahina Bose <sabose(a)redhat.com>

Sorry about the delay, I've run the truncate. I'm not sure what results you were expecting, but it executed fine, no delays, no errors, no problems.

thanks
Brett Stevens

On Thu, Sep 24, 2015 at 7:29 PM, Brett Stevens <gorttman(a)i3sec.com.au> wrote:
Thanks, I'll do that tomorrow morning.

Just out of interest, I keep getting WARN errors in the engine.log along the lines of node not present (sjcvhost02, which is the arbiter) and no gluster network present, even after I have added the gluster network option in the network management GUI.

thanks

Brett Stevens

On Thu, Sep 24, 2015 at 7:26 PM, Sahina Bose <sabose(a)redhat.com> wrote:
Sorry, I intended to forward it to a gluster devel.

Btw, there were no errors in the mount log - so unable to root cause why the truncate of the file failed with an IO error. Was the log from vhost03 - /var/log/glusterfs/rhev-data-center-mnt-glusterSD-sjcstorage01:_vmstore.log ?

We will look into the logs you attached to see if there are any errors reported at the bricks. (But there should have been some error in the mount log!)

Could you also try "truncate -s 10M test" from the mount point (manually mount gluster using: # mount -t glusterfs sjcstorage01:/vmstore <mountpoint>) and report results.
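For completeness, the manual test suggested above would look something like this; the mount point is arbitrary, and the backup-volfile-servers option simply mirrors what vdsm itself passes later in this thread:

  mkdir -p /mnt/vmstore-test
  mount -t glusterfs -o backup-volfile-servers=sjcstorage02:sjcvhost02 \
        sjcstorage01:/vmstore /mnt/vmstore-test
  truncate -s 10M /mnt/vmstore-test/test   # the same kind of operation the domain
                                           # metadata creation appears to fail on
  ls -l /mnt/vmstore-test/test
  umount /mnt/vmstore-test

If the truncate returns an I/O error here as well, the corresponding entry should show up in the client mount log under /var/log/glusterfs/.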

On 09/24/2015 02:32 PM, Brett Stevens wrote:
Hi Sahina.

Something has gone wrong with your last email. I have received a message from you, but did not get any text to go with it. Could you resend please?

thanks

On Thu, Sep 24, 2015 at 6:48 PM, Sahina Bose <sabose(a)redhat.com> wrote:

On 09/24/2015 04:21 AM, Brett Stevens wrote:
Hi Sahina.

vhost02 is the engine node, vhost03 is the hypervisor, and storage01 and 02 are the gluster nodes. I've put the arbiter on vhost02.

All tasks are separated (except engine and arbiter).

thanks

On Wed, Sep 23, 2015 at 9:48 PM, Sahina Bose <sabose(a)redhat.com> wrote:
+ ovirt-users

Some clarity on your setup -
sjcvhost03 - is this your arbiter node and ovirt management node? And are you running compute + storage on the same nodes - i.e., sjcstorage01, sjcstorage02, sjcvhost03 (arbiter)?

CreateStorageDomainVDSCommand(HostName = sjcvhost03, CreateStorageDomainVDSCommandParameters:{runAsync='true', hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c', storageDomain='StorageDomainStatic:{name='sjcvmstore', id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}', args='sjcstorage01:/vmstore'}), log id: b9fe587

- fails with: Error creating a storage domain's metadata: ("create meta file 'outbox' failed: [Errno 5] Input/output error",)

Are the vdsm logs you provided from sjcvhost03? There are no errors to be seen in the gluster log you provided. Could you provide the mount log from sjcvhost03 (at /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore.log most likely)?
If possible, /var/log/glusterfs/* from the 3 storage nodes.

thanks
sahina

On 09/23/2015 05:02 AM, Brett Stevens wrote:
Hi Sahina,

as requested, here are some logs taken during a domain create.

2015-09-22 18:46:44,320 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-88) [] START, GlusterVolumesListVDSCommand(HostName = sjcstorage01, GlusterVolumesListVDSParameters:{runAsync='true', hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id: 2205ff1
2015-09-22 18:46:44,413 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-88) [] Could not associate brick 'sjcstorage01:/export/vmstore/brick01' of volume '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct network as no gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
2015-09-22 18:46:44,417 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-88) [] Could not associate brick 'sjcstorage02:/export/vmstore/brick01' of volume '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct network as no gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
2015-09-22 18:46:44,417 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-88) [] Could not add brick 'sjcvhost02:/export/vmstore/brick01' to volume '030f270a-0999-4df4-9b14-ae56eb0a2fb9' - server uuid '29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
2015-09-22 18:46:44,418 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-88) [] FINISH, GlusterVolumesListVDSCommand, return: {030f270a-0999-4df4-9b14-ae56eb0a2fb9=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a0628f36}, log id: 2205ff1
2015-09-22 18:46:45,215 INFO [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] (default task-24) [5099cda3] Lock Acquired to object 'EngineLock:{exclusiveLocks='[sjcstorage01:/vmstore=<STORAGE_CONNECTION, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2015-09-22 18:46:45,230 INFO [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] (default task-24) [5099cda3] Running command: AddStorageServerConnectionCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN
2015-09-22 18:46:45,233 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-24) [5099cda3] START, ConnectStorageServerVDSCommand(HostName = sjcvhost03, StorageServerConnectionManagementVDSParameters:{runAsync='true', hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c', storagePoolId='00000000-0000-0000-0000-000000000000', storageType='GLUSTERFS', connectionList='[StorageServerConnections:{id='null', connection='sjcstorage01:/vmstore', iqn='null', vfsType='glusterfs', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 6a112292
2015-09-22 18:46:48,065 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-24) [5099cda3] FINISH, ConnectStorageServerVDSCommand, return: {00000000-0000-0000-0000-000000000000=0}, log id: 6a112292
2015-09-22 18:46:48,073 INFO [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] (default task-24) [5099cda3] Lock freed to object 'EngineLock:{exclusiveLocks='[sjcstorage01:/vmstore=<STORAGE_CONNECTION, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2015-09-22 18:46:48,188 INFO [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand] (default task-23) [6410419] Running command: AddGlusterFsStorageDomainCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN
2015-09-22 18:46:48,206 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-23) [6410419] START, ConnectStorageServerVDSCommand(HostName = sjcvhost03, StorageServerConnectionManagementVDSParameters:{runAsync='true', hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c', storagePoolId='00000000-0000-0000-0000-000000000000', storageType='GLUSTERFS', connectionList='[StorageServerConnections:{id='ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e', connection='sjcstorage01:/vmstore', iqn='null', vfsType='glusterfs', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 38a2b0d
2015-09-22 18:46:48,219 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-23) [6410419] FINISH, ConnectStorageServerVDSCommand, return: {ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=0}, log id: 38a2b0d
2015-09-22 18:46:48,221 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-23) [6410419] START, CreateStorageDomainVDSCommand(HostName = sjcvhost03, CreateStorageDomainVDSCommandParameters:{runAsync='true', hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c', storageDomain='StorageDomainStatic:{name='sjcvmstore', id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}', args='sjcstorage01:/vmstore'}), log id: b9fe587
2015-09-22 18:46:48,744 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-23) [6410419] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VDSM sjcvhost03 command failed: Error creating a storage domain's metadata: ("create meta file 'outbox' failed: [Errno 5] Input/output error",)
2015-09-22 18:46:48,744 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-23) [6410419] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand' return value 'StatusOnlyReturnForXmlRpc [status=StatusForXmlRpc [code=362, message=Error creating a storage domain's metadata: ("create meta file 'outbox' failed: [Errno 5] Input/output error",)]]'
2015-09-22 18:46:48,744 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-23) [6410419] HostName = sjcvhost03
2015-09-22 18:46:48,745 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-23) [6410419] Command 'CreateStorageDomainVDSCommand(HostName = sjcvhost03, CreateStorageDomainVDSCommandParameters:{runAsync='true', hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c', storageDomain='StorageDomainStatic:{name='sjcvmstore', id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}', args='sjcstorage01:/vmstore'})' execution failed: VDSGenericException: VDSErrorException: Failed in vdscommand to CreateStorageDomainVDS, error = Error creating a storage domain's metadata: ("create meta file 'outbox' failed: [Errno 5] Input/output error",)
2015-09-22 18:46:48,745 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-23) [6410419] FINISH, CreateStorageDomainVDSCommand, log id: b9fe587
2015-09-22 18:46:48,745 ERROR [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand] (default task-23) [6410419] Command 'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed in vdscommand to CreateStorageDomainVDS, error = Error creating a storage domain's metadata: ("create meta file 'outbox' failed: [Errno 5] Input/output error",) (Failed with error StorageDomainMetadataCreationError and code 362)
2015-09-22 18:46:48,755 INFO [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand] (default task-23) [6410419] Command [id=5ae15f53-69a1-47c5-b3a5-82f32c20e48f]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.StorageDomainDynamic; snapshot: 597d5b5b-7c09-4de9-8840-6993bd9b61a6.
2015-09-22 18:46:48,758 INFO [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand] (default task-23) [6410419] Command [id=5ae15f53-69a1-47c5-b3a5-82f32c20e48f]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.StorageDomainStatic; snapshot: 597d5b5b-7c09-4de9-8840-6993bd9b61a6.
2015-09-22 18:46:48,769 ERROR [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand] (default task-23) [6410419] Transaction rolled-back for command 'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand'.
2015-09-22 18:46:48,784 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-23) [6410419] Correlation ID: 6410419, Job ID: 78692780-a06f-49a5-b6b1-e6c24a820d62, Call Stack: null, Custom Event ID: -1, Message: Failed to add Storage Domain sjcvmstore. (User: admin@internal)
2015-09-22 18:46:48,996 INFO [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand] (default task-32) [1635a244] Lock Acquired to object 'EngineLock:{exclusiveLocks='[ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=<STORAGE_CONNECTION, ACTION_TYPE_FAILED_OBJECT_LOCKED>, sjcstorage01:/vmstore=<STORAGE_CONNECTION, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2015-09-22 18:46:49,018 INFO [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand] (default task-32) [1635a244] Running command: RemoveStorageServerConnectionCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN
2015-09-22 18:46:49,024 INFO [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand] (default task-32) [1635a244] Removing connection 'ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e' from database
2015-09-22 18:46:49,026 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand] (default task-32) [1635a244] START, DisconnectStorageServerVDSCommand(HostName = sjcvhost03, StorageServerConnectionManagementVDSParameters:{runAsync='true', hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c', storagePoolId='00000000-0000-0000-0000-000000000000', storageType='GLUSTERFS', connectionList='[StorageServerConnections:{id='ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e', connection='sjcstorage01:/vmstore', iqn='null', vfsType='glusterfs', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 39d3b568
2015-09-22 18:46:49,248 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand] (default task-32) [1635a244] FINISH, DisconnectStorageServerVDSCommand, return: {ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=0}, log id: 39d3b568
2015-09-22 18:46:49,252 INFO [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand] (default task-32) [1635a244] Lock freed to object 'EngineLock:{exclusiveLocks='[ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=<STORAGE_CONNECTION, ACTION_TYPE_FAILED_OBJECT_LOCKED>, sjcstorage01:/vmstore=<STORAGE_CONNECTION, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2015-09-22 18:46:49,431 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-3) [] START, GlusterVolumesListVDSCommand(HostName = sjcstorage01, GlusterVolumesListVDSParameters:{runAsync='true', hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id: 17014ae8
2015-09-22 18:46:49,511 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-3) [] Could not associate brick 'sjcstorage01:/export/vmstore/brick01' of volume '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct network as no gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
2015-09-22 18:46:49,515 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-3) [] Could not associate brick 'sjcstorage02:/export/vmstore/brick01' of volume '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct network as no gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
2015-09-22 18:46:49,516 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-3) [] Could not add brick 'sjcvhost02:/export/vmstore/brick01' to volume '030f270a-0999-4df4-9b14-ae56eb0a2fb9' - server uuid '29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
2015-09-22 18:46:49,516 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-3) [] FINISH, GlusterVolumesListVDSCommand, return: {030f270a-0999-4df4-9b14-ae56eb0a2fb9=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@92ed0f75}, log id: 17014ae8

ovirt engine thinks that sjcstorage01 is sjcstorage01; it's all testbed at the moment and is all short names, defined in /etc/hosts (all copied to each server for consistency).

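As a sanity check on the name resolution side, every node would need the same short-name entries; something along these lines in /etc/hosts on each server (the addresses below are made-up placeholders, only the names come from this thread):

  192.0.2.11  sjcstorage01
  192.0.2.12  sjcstorage02
  192.0.2.13  sjcvhost02
  192.0.2.14  sjcvhost03

Running getent hosts sjcstorage01 (and so on) on each node is a quick way to confirm they all resolve identically.
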
volume info for vmstore is

Status of volume: vmstore
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick sjcstorage01:/export/vmstore/brick01   49157     0          Y       7444
Brick sjcstorage02:/export/vmstore/brick01   49157     0          Y       4063
Brick sjcvhost02:/export/vmstore/brick01     49156     0          Y       3243
NFS Server on localhost                      2049      0          Y       3268
Self-heal Daemon on localhost                N/A       N/A        Y       3284
NFS Server on sjcstorage01                   2049      0          Y       7463
Self-heal Daemon on sjcstorage01             N/A       N/A        Y       7472
NFS Server on sjcstorage02                   2049      0          Y       4082
Self-heal Daemon on sjcstorage02             N/A       N/A        Y       4090

Task Status of Volume vmstore
------------------------------------------------------------------------------
There are no active volume tasks

vdsm logs from the time the domain is added:

Thread-789::DEBUG::2015-09-22 19:12:05,865::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Thread-790::DEBUG::2015-09-22 19:12:07,797::task::595::Storage.TaskManager.Task::(_updateState) Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::moving from state init -> state preparing
Thread-790::INFO::2015-09-22 19:12:07,797::logUtils::48::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-790::INFO::2015-09-22 19:12:07,797::logUtils::51::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-790::DEBUG::2015-09-22 19:12:07,797::task::1191::Storage.TaskManager.Task::(prepare) Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::finished: {}
Thread-790::DEBUG::2015-09-22 19:12:07,797::task::595::Storage.TaskManager.Task::(_updateState) Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::moving from state preparing -> state finished
Thread-790::DEBUG::2015-09-22 19:12:07,797::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-790::DEBUG::2015-09-22 19:12:07,797::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-790::DEBUG::2015-09-22 19:12:07,797::task::993::Storage.TaskManager.Task::(_decref) Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::ref 0 aborting False
Thread-790::DEBUG::2015-09-22 19:12:07,802::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Reactor thread::INFO::2015-09-22 19:12:14,816::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:52510
Reactor thread::DEBUG::2015-09-22 19:12:14,822::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-09-22 19:12:14,823::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:52510
Reactor thread::DEBUG::2015-09-22 19:12:14,823::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 52510)
BindingXMLRPC::INFO::2015-09-22 19:12:14,823::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:52510
Thread-791::INFO::2015-09-22 19:12:14,823::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:52510 started
Thread-791::INFO::2015-09-22 19:12:14,825::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:52510 stopped
Thread-792::DEBUG::2015-09-22 19:12:20,872::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Thread-793::DEBUG::2015-09-22 19:12:22,832::task::595::Storage.TaskManager.Task::(_updateState) Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::moving from state init -> state preparing
Thread-793::INFO::2015-09-22 19:12:22,832::logUtils::48::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-793::INFO::2015-09-22 19:12:22,832::logUtils::51::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-793::DEBUG::2015-09-22 19:12:22,832::task::1191::Storage.TaskManager.Task::(prepare) Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::finished: {}
Thread-793::DEBUG::2015-09-22 19:12:22,832::task::595::Storage.TaskManager.Task::(_updateState) Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::moving from state preparing -> state finished
Thread-793::DEBUG::2015-09-22 19:12:22,833::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-793::DEBUG::2015-09-22 19:12:22,833::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-793::DEBUG::2015-09-22 19:12:22,833::task::993::Storage.TaskManager.Task::(_decref) Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::ref 0 aborting False
Thread-793::DEBUG::2015-09-22 19:12:22,837::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Reactor thread::INFO::2015-09-22 19:12:29,841::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:52511
Reactor thread::DEBUG::2015-09-22 19:12:29,848::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-09-22 19:12:29,849::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:52511
Reactor thread::DEBUG::2015-09-22 19:12:29,849::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 52511)
BindingXMLRPC::INFO::2015-09-22 19:12:29,849::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:52511
Thread-794::INFO::2015-09-22 19:12:29,849::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:52511 started
Thread-794::INFO::2015-09-22 19:12:29,851::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:52511 stopped
Thread-795::DEBUG::2015-09-22 19:12:35,520::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StoragePool.connectStorageServer' in bridge with {u'connectionParams': [{u'id': u'00000000-0000-0000-0000-000000000000', u'connection': u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], u'storagepoolID': u'00000000-0000-0000-0000-000000000000', u'domainType': 7}
Thread-795::DEBUG::2015-09-22 19:12:35,520::task::595::Storage.TaskManager.Task::(_updateState) Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::moving from state init -> state preparing
Thread-795::INFO::2015-09-22 19:12:35,521::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'00000000-0000-0000-0000-000000000000', u'connection': u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], options=None)
Thread-795::DEBUG::2015-09-22 19:12:35,539::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore mode: None
Thread-795::DEBUG::2015-09-22 19:12:35,540::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/sudo -n /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount -t glusterfs -o backup-volfile-servers=sjcstorage02:sjcvhost02 sjcstorage01:/vmstore /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore (cwd None)
Thread-795::DEBUG::2015-09-22 19:12:35,706::hsm::2417::Storage.HSM::(__prefetchDomains) glusterDomPath: glusterSD/*
Thread-796::DEBUG::2015-09-22 19:12:35,707::__init__::298::IOProcessClient::(_run) Starting IOProcess...
Thread-797::DEBUG::2015-09-22 19:12:35,712::__init__::298::IOProcessClient::(_run) Starting IOProcess...
Thread-795::DEBUG::2015-09-22 19:12:35,721::hsm::2429::Storage.HSM::(__prefetchDomains) Found SD uuids: ()
Thread-795::DEBUG::2015-09-22 19:12:35,721::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs: {41b75ca9-9783-42a7-9a23-10a2ae3cbb96: storage.glusterSD.findDomain, 597d5b5b-7c09-4de9-8840-6993bd9b61a6: storage.glusterSD.findDomain, ef17fec4-fecf-4d7e-b815-d1db4ef65225: storage.glusterSD.findDomain}
Thread-795::INFO::2015-09-22 19:12:35,721::logUtils::51::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': u'00000000-0000-0000-0000-000000000000'}]}
Thread-795::DEBUG::2015-09-22 19:12:35,722::task::1191::Storage.TaskManager.Task::(prepare) Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::finished: {'statuslist': [{'status': 0, 'id': u'00000000-0000-0000-0000-000000000000'}]}
Thread-795::DEBUG::2015-09-22 19:12:35,722::task::595::Storage.TaskManager.Task::(_updateState) Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::moving from state preparing -> state finished
Thread-795::DEBUG::2015-09-22 19:12:35,722::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-795::DEBUG::2015-09-22 19:12:35,722::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-795::DEBUG::2015-09-22 19:12:35,722::task::993::Storage.TaskManager.Task::(_decref) Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::ref 0 aborting False
Thread-795::DEBUG::2015-09-22 19:12:35,722::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest) Return 'StoragePool.connectStorageServer' in bridge with [{'status': 0, 'id': u'00000000-0000-0000-0000-000000000000'}]
Thread-795::DEBUG::2015-09-22 19:12:35,722::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Thread-798::DEBUG::2015-09-22 19:12:35,775::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StoragePool.connectStorageServer' in bridge with {u'connectionParams': [{u'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection': u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], u'storagepoolID': u'00000000-0000-0000-0000-000000000000', u'domainType': 7}
Thread-798::DEBUG::2015-09-22 19:12:35,775::task::595::Storage.TaskManager.Task::(_updateState) Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::moving from state init -> state preparing
Thread-798::INFO::2015-09-22 19:12:35,776::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection': u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], options=None)
Thread-798::DEBUG::2015-09-22 19:12:35,777::hsm::2417::Storage.HSM::(__prefetchDomains) glusterDomPath: glusterSD/*
Thread-798::DEBUG::2015-09-22 19:12:35,782::hsm::2429::Storage.HSM::(__prefetchDomains) Found SD uuids: ()
Thread-798::DEBUG::2015-09-22 19:12:35,782::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs: {41b75ca9-9783-42a7-9a23-10a2ae3cbb96: storage.glusterSD.findDomain, 597d5b5b-7c09-4de9-8840-6993bd9b61a6: storage.glusterSD.findDomain, ef17fec4-fecf-4d7e-b815-d1db4ef65225: storage.glusterSD.findDomain}
Thread-798::INFO::2015-09-22 19:12:35,782::logUtils::51::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
Thread-798::DEBUG::2015-09-22 19:12:35,783::task::1191::Storage.TaskManager.Task::(prepare) Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::finished: {'statuslist': [{'status': 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
Thread-798::DEBUG::2015-09-22 19:12:35,783::task::595::Storage.TaskManager.Task::(_updateState) Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::moving from state preparing -> state finished
Thread-798::DEBUG::2015-09-22 19:12:35,783::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-798::DEBUG::2015-09-22 19:12:35,783::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-798::DEBUG::2015-09-22 19:12:35,783::task::993::Storage.TaskManager.Task::(_decref) Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::ref 0 aborting False
Thread-798::DEBUG::2015-09-22 19:12:35,783::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest) Return 'StoragePool.connectStorageServer' in bridge with [{'status': 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]
Thread-798::DEBUG::2015-09-22 19:12:35,783::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Thread-801::DEBUG::2015-09-22 19:12:35,787::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StorageDomain.create' in bridge with {u'name': u'sjcvmstore01', u'domainType': 7, u'domainClass': 1, u'typeArgs': u'sjcstorage01:/vmstore', u'version': u'3', u'storagedomainID': u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3'}
Thread-801::DEBUG::2015-09-22 19:12:35,788::task::595::Storage.TaskManager.Task::(_updateState) Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from state init -> state preparing
Thread-801::INFO::2015-09-22 19:12:35,788::logUtils::48::dispatcher::(wrapper) Run and protect: createStorageDomain(storageType=7, sdUUID=u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3', domainName=u'sjcvmstore01', typeSpecificArg=u'sjcstorage01:/vmstore', domClass=1, domVersion=u'3', options=None)
Thread-801::DEBUG::2015-09-22 19:12:35,788::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
Thread-801::DEBUG::2015-09-22 19:12:35,788::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-801::DEBUG::2015-09-22 19:12:35,788::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
Thread-801::DEBUG::2015-09-22 19:12:35,788::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-801::DEBUG::2015-09-22 19:12:35,788::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds
Thread-801::DEBUG::2015-09-22 19:12:35,788::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)
Thread-801::DEBUG::2015-09-22 19:12:35,821::misc::743::Storage.SamplingMethod::(__call__) Returning last result
Thread-801::DEBUG::2015-09-22 19:12:35,821::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan)
Thread-801::DEBUG::2015-09-22 19:12:35,821::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-801::DEBUG::2015-09-22 19:12:35,821::hba::56::Storage.HBA::(rescan) Starting scan
Thread-802::DEBUG::2015-09-22 19:12:35,882::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Thread-801::DEBUG::2015-09-22 19:12:35,912::hba::62::Storage.HBA::(rescan) Scan finished
Thread-801::DEBUG::2015-09-22 19:12:35,912::misc::743::Storage.SamplingMethod::(__call__) Returning last result
Thread-801::DEBUG::2015-09-22 19:12:35,912::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /usr/sbin/multipath (cwd None)
Thread-801::DEBUG::2015-09-22 19:12:35,936::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0
Thread-801::DEBUG::2015-09-22 19:12:35,936::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None)
Thread-801::DEBUG::2015-09-22 19:12:35,946::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-801::DEBUG::2015-09-22 19:12:35,947::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-801::DEBUG::2015-09-22 19:12:35,947::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-801::DEBUG::2015-09-22 19:12:35,947::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-801::DEBUG::2015-09-22 19:12:35,948::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-801::DEBUG::2015-09-22 19:12:35,948::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-801::DEBUG::2015-09-22 19:12:35,948::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-801::DEBUG::2015-09-22 19:12:35,948::misc::743::Storage.SamplingMethod::(__call__) Returning last result
Thread-801::ERROR::2015-09-22 19:12:35,949::sdc::138::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3
Thread-801::ERROR::2015-09-22 19:12:35,949::sdc::155::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3
Thread-801::DEBUG::2015-09-22 19:12:35,949::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
Thread-801::DEBUG::2015-09-22 19:12:35,950::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 (cwd None)
Thread-801::DEBUG::2015-09-22 19:12:35,985::lvm::291::Storage.Misc.excCmd::(cmd) FAILED: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n Volume group "c02fda97-62e3-40d3-9a6e-ac5d100f8ad3" not found\n Cannot process volume group c02fda97-62e3-40d3-9a6e-ac5d100f8ad3\n'; <rc> = 5
Thread-801::WARNING::2015-09-22 19:12:35,986::lvm::376::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] [' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!', ' Volume group "c02fda97-62e3-40d3-9a6e-ac5d100f8ad3" not found', ' Cannot process volume group c02fda97-62e3-40d3-9a6e-ac5d100f8ad3']
Thread-801::DEBUG::2015-09-22 19:12:35,987::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
Thread-801::ERROR::2015-09-22 19:12:35,997::sdc::144::Storage.StorageDomainCache::(_findDomain) domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 not found
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sdc.py", line 142, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 172, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',)
Thread-801::INFO::2015-09-22 19:12:35,998::nfsSD::69::Storage.StorageDomain::(create) sdUUID=c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 domainName=sjcvmstore01 remotePath=sjcstorage01:/vmstore domClass=1
Thread-801::DEBUG::2015-09-22 19:12:36,015::__init__::298::IOProcessClient::(_run) Starting IOProcess...
Thread-801::ERROR::2015-09-22 19:12:36,067::task::866::Storage.TaskManager.Task::(_setError) Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
    res = f(*args,
**kwargs)</p>
<p> File
"/usr/share/vdsm/storage/hsm.py",
line 2697, in
createStorageDomain</p>
<p> domVersion)</p>
<p> File
"/usr/share/vdsm/storage/nfsSD.py",
line 84, in create</p>
<p> remotePath,
storageType, version)</p>
<p> File
"/usr/share/vdsm/storage/fileSD.py",
line 264, in
_prepareMetadata</p>
<p> "create meta file
'%s' failed: %s" %
(metaFile, str(e)))</p>
<p>StorageDomainMetadataCreationError:
Error creating a
storage domain's
metadata: ("create
meta file 'outbox'
failed: [Errno 5]
Input/output error",)</p>
<p>Thread-801::DEBUG::2015-09-22
19:12:36,067::task::885::Storage.TaskManager.Task::(_run)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Task._run:
d2d29352-8677-45cb-a4ab-06aa32cf1acb
(7,
u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',
u'sjcvmstore01',
u'sjcstorage01:/vmstore',
1, u'3') {} failed -
stopping task</p>
<p>Thread-801::DEBUG::2015-09-22
19:12:36,067::task::1246::Storage.TaskManager.Task::(stop)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::stopping
in state preparing
(force False)</p>
<p>Thread-801::DEBUG::2015-09-22
19:12:36,067::task::993::Storage.TaskManager.Task::(_decref)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::ref
1 aborting True</p>
<p>Thread-801::<a
moz-do-not-send="true"
class="moz-txt-link-freetext" href="INFO::2015-09-22"><a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a></a>
19:12:36,067::task::1171::Storage.TaskManager.Task::(prepare)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::aborting:
Task is aborted:
"Error creating a
storage domain's
metadata" - code 362</p>
<p>Thread-801::DEBUG::2015-09-22
19:12:36,068::task::1176::Storage.TaskManager.Task::(prepare)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Prepare:
aborted: Error
creating a storage
domain's metadata</p>
<p>Thread-801::DEBUG::2015-09-22
19:12:36,068::task::993::Storage.TaskManager.Task::(_decref)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::ref
0 aborting True</p>
<p>Thread-801::DEBUG::2015-09-22
19:12:36,068::task::928::Storage.TaskManager.Task::(_doAbort)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Task._doAbort:
force False</p>
<p>Thread-801::DEBUG::2015-09-22
19:12:36,068::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll
requests {}</p>
<p>Thread-801::DEBUG::2015-09-22
19:12:36,068::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving
from state preparing
-> state aborting</p>
<p>Thread-801::DEBUG::2015-09-22
19:12:36,068::task::550::Storage.TaskManager.Task::(__state_aborting)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::_aborting:
recover policy none</p>
<p>Thread-801::DEBUG::2015-09-22
19:12:36,068::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving
from state aborting
-> state failed</p>
<p>Thread-801::DEBUG::2015-09-22
19:12:36,068::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll
requests {} resources
{}</p>
<p>Thread-801::DEBUG::2015-09-22
19:12:36,068::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll
requests {}</p>
<p>Thread-801::ERROR::2015-09-22
19:12:36,068::dispatcher::76::Storage.Dispatcher::(wrapper)
{'status': {'message':
'Error creating a
storage domain\'s
metadata: ("create
meta file \'outbox\'
failed: [Errno 5]
Input/output
error",)', 'code':
362}}</p>
<p>Thread-801::DEBUG::2015-09-22
19:12:36,069::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,180::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
Calling
'StoragePool.disconnectStorageServer'
in bridge with
{u'connectionParams':
[{u'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054',
u'connection':
u'sjcstorage01:/vmstore',
u'iqn': u'', u'user':
u'', u'tpgt': u'1',
u'vfs_type':
u'glusterfs',
u'password':
'********', u'port':
u''}],
u'storagepoolID':
u'00000000-0000-0000-0000-000000000000',
u'domainType': 7}</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,181::task::595::Storage.TaskManager.Task::(_updateState)
Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::moving
from state init ->
state preparing</p>
<p>Thread-807::<a
moz-do-not-send="true"
class="moz-txt-link-freetext" href="INFO::2015-09-22"><a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a></a>
19:12:36,182::logUtils::48::dispatcher::(wrapper)
Run and protect:
disconnectStorageServer(domType=7,
spUUID=u'00000000-0000-0000-0000-000000000000',
conList=[{u'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054',
u'connection':
u'sjcstorage01:/vmstore',
u'iqn': u'', u'user':
u'', u'tpgt': u'1',
u'vfs_type':
u'glusterfs',
u'password':
'********', u'port':
u''}], options=None)</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,182::mount::229::Storage.Misc.excCmd::(_runcmd)
/usr/bin/sudo -n
/usr/bin/umount -f -l
/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore
(cwd None)</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,222::misc::733::Storage.SamplingMethod::(__call__)
Trying to enter
sampling method
(storage.sdc.refreshStorage)</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,222::misc::736::Storage.SamplingMethod::(__call__)
Got in to sampling
method</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,222::misc::733::Storage.SamplingMethod::(__call__)
Trying to enter
sampling method
(storage.iscsi.rescan)</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,222::misc::736::Storage.SamplingMethod::(__call__)
Got in to sampling
method</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,223::iscsi::431::Storage.ISCSI::(rescan)
Performing SCSI scan,
this will take up to
30 seconds</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,223::iscsiadm::97::Storage.Misc.excCmd::(_runCmd)
/usr/bin/sudo -n
/sbin/iscsiadm -m
session -R (cwd None)</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,258::misc::743::Storage.SamplingMethod::(__call__)
Returning last result</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,258::misc::733::Storage.SamplingMethod::(__call__)
Trying to enter
sampling method
(storage.hba.rescan)</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,258::misc::736::Storage.SamplingMethod::(__call__)
Got in to sampling
method</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,258::hba::56::Storage.HBA::(rescan)
Starting scan</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,350::hba::62::Storage.HBA::(rescan)
Scan finished</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,350::misc::743::Storage.SamplingMethod::(__call__)
Returning last result</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,350::multipath::77::Storage.Misc.excCmd::(rescan)
/usr/bin/sudo -n
/usr/sbin/multipath
(cwd None)</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,374::multipath::77::Storage.Misc.excCmd::(rescan)
SUCCESS: <err> =
''; <rc> = 0</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,374::utils::661::root::(execCmd)
/sbin/udevadm settle
--timeout=5 (cwd None)</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,383::utils::679::root::(execCmd)
SUCCESS: <err> =
''; <rc> = 0</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,384::lvm::498::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm
invalidate operation'
got the operation
mutex</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,385::lvm::500::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm
invalidate operation'
released the operation
mutex</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,385::lvm::509::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm
invalidate operation'
got the operation
mutex</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,385::lvm::511::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm
invalidate operation'
released the operation
mutex</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,386::lvm::529::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm
invalidate operation'
got the operation
mutex</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,386::lvm::531::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm
invalidate operation'
released the operation
mutex</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,386::misc::743::Storage.SamplingMethod::(__call__)
Returning last result</p>
<p>Thread-807::<a
moz-do-not-send="true"
class="moz-txt-link-freetext" href="INFO::2015-09-22"><a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a></a>
19:12:36,386::logUtils::51::dispatcher::(wrapper)
Run and protect:
disconnectStorageServer,
Return response:
{'statuslist':
[{'status': 0, 'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,387::task::1191::Storage.TaskManager.Task::(prepare)
Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::finished:
{'statuslist':
[{'status': 0, 'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,387::task::595::Storage.TaskManager.Task::(_updateState)
Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::moving
from state preparing
-> state finished</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,387::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll
requests {} resources
{}</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,387::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll
requests {}</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,387::task::993::Storage.TaskManager.Task::(_decref)
Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::ref
0 aborting False</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,388::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
Return
'StoragePool.disconnectStorageServer'
in bridge with
[{'status': 0, 'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]</p>
<p>Thread-807::DEBUG::2015-09-22
19:12:36,388::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p>Thread-808::DEBUG::2015-09-22
19:12:37,868::task::595::Storage.TaskManager.Task::(_updateState)
Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving
from state init ->
state preparing</p>
<p>Thread-808::<a
moz-do-not-send="true"
class="moz-txt-link-freetext" href="INFO::2015-09-22"><a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a></a>
19:12:37,868::logUtils::48::dispatcher::(wrapper)
Run and protect:
repoStats(options=None)</p>
<p>Thread-808::<a
moz-do-not-send="true"
class="moz-txt-link-freetext" href="INFO::2015-09-22"><a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a></a>
19:12:37,868::logUtils::51::dispatcher::(wrapper)
Run and protect:
repoStats, Return
response: {}</p>
<p>Thread-808::DEBUG::2015-09-22
19:12:37,868::task::1191::Storage.TaskManager.Task::(prepare)
Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::finished:
{}</p>
<p>Thread-808::DEBUG::2015-09-22
19:12:37,868::task::595::Storage.TaskManager.Task::(_updateState)
Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving
from state preparing
-> state finished</p>
<p>Thread-808::DEBUG::2015-09-22
19:12:37,868::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll
requests {} resources
{}</p>
<p>Thread-808::DEBUG::2015-09-22
19:12:37,868::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll
requests {}</p>
<p>Thread-808::DEBUG::2015-09-22
19:12:37,868::task::993::Storage.TaskManager.Task::(_decref)
Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::ref
0 aborting False</p>
<p>Thread-808::DEBUG::2015-09-22
19:12:37,873::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p>Reactor thread::<a
moz-do-not-send="true"
class="moz-txt-link-freetext" href="INFO::2015-09-22"><a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a></a>
19:12:44,867::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection
from <a
moz-do-not-send="true"
href="http://127.0.0.1:52512" target="_blank">127.0.0.1:52512</a></p>
<p>Reactor
thread::DEBUG::2015-09-22
19:12:44,874::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11</p>
<p>Reactor thread::<a
moz-do-not-send="true"
class="moz-txt-link-freetext" href="INFO::2015-09-22"><a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a></a>
19:12:44,875::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml
from <a
moz-do-not-send="true"
href="http://127.0.0.1:52512" target="_blank">127.0.0.1:52512</a></p>
<p>Reactor
thread::DEBUG::2015-09-22
19:12:44,875::bindingxmlrpc::1297::XmlDetector::(handle_socket)
xml over http detected
from ('127.0.0.1',
52512)</p>
<p>BindingXMLRPC::<a
moz-do-not-send="true"
class="moz-txt-link-freetext" href="INFO::2015-09-22"><a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a></a>
19:12:44,875::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request
handler for <a
moz-do-not-send="true"
href="http://127.0.0.1:52512" target="_blank">127.0.0.1:52512</a></p>
<p>Thread-809::<a
moz-do-not-send="true"
class="moz-txt-link-freetext" href="INFO::2015-09-22"><a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a></a>
19:12:44,876::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for <a
moz-do-not-send="true" href="http://127.0.0.1:52512" target="_blank">127.0.0.1:52512</a>
started</p>
<p>Thread-809::<a
moz-do-not-send="true"
class="moz-txt-link-freetext" href="INFO::2015-09-22"><a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a></a>
19:12:44,877::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for <a
moz-do-not-send="true" href="http://127.0.0.1:52512" target="_blank">127.0.0.1:52512</a>
stopped</p>
<p>Thread-810::DEBUG::2015-09-22
19:12:50,889::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p>Thread-811::DEBUG::2015-09-22
19:12:52,902::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::moving
from state init ->
state preparing</p>
<p>Thread-811::<a
moz-do-not-send="true"
class="moz-txt-link-freetext" href="INFO::2015-09-22"><a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a></a>
19:12:52,902::logUtils::48::dispatcher::(wrapper)
Run and protect:
repoStats(options=None)</p>
<p>Thread-811::<a
moz-do-not-send="true"
class="moz-txt-link-freetext" href="INFO::2015-09-22"><a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a></a>
19:12:52,902::logUtils::51::dispatcher::(wrapper)
Run and protect:
repoStats, Return
response: {}</p>
<p>Thread-811::DEBUG::2015-09-22
19:12:52,902::task::1191::Storage.TaskManager.Task::(prepare)
Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::finished:
{}</p>
<p>Thread-811::DEBUG::2015-09-22
19:12:52,903::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::moving
from state preparing
-> state finished</p>
<p>Thread-811::DEBUG::2015-09-22
19:12:52,903::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll
requests {} resources
{}</p>
<p>Thread-811::DEBUG::2015-09-22
19:12:52,903::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll
requests {}</p>
<p>Thread-811::DEBUG::2015-09-22
19:12:52,903::task::993::Storage.TaskManager.Task::(_decref)
Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::ref
0 aborting False</p>
<p>Thread-811::DEBUG::2015-09-22
19:12:52,908::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p>Reactor thread::<a
moz-do-not-send="true"
class="moz-txt-link-freetext" href="INFO::2015-09-22"><a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a></a>
19:12:59,895::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection
from <a
moz-do-not-send="true"
href="http://127.0.0.1:52513" target="_blank">127.0.0.1:52513</a></p>
<p>Reactor
thread::DEBUG::2015-09-22
19:12:59,902::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11</p>
<p>Reactor thread::<a
moz-do-not-send="true"
class="moz-txt-link-freetext" href="INFO::2015-09-22"><a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a></a>
19:12:59,902::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml
from <a
moz-do-not-send="true"
href="http://127.0.0.1:52513" target="_blank">127.0.0.1:52513</a></p>
<p>Reactor
thread::DEBUG::2015-09-22
19:12:59,902::bindingxmlrpc::1297::XmlDetector::(handle_socket)
xml over http detected
from ('127.0.0.1',
52513)</p>
<p>BindingXMLRPC::<a
moz-do-not-send="true"
class="moz-txt-link-freetext" href="INFO::2015-09-22"><a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a></a>
19:12:59,903::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request
handler for <a
moz-do-not-send="true"
href="http://127.0.0.1:52513" target="_blank">127.0.0.1:52513</a></p>
<p>Thread-812::<a
moz-do-not-send="true"
class="moz-txt-link-freetext" href="INFO::2015-09-22"><a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a></a>
19:12:59,903::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for <a
moz-do-not-send="true" href="http://127.0.0.1:52513" target="_blank">127.0.0.1:52513</a>
started</p>
<p>Thread-812::<a
moz-do-not-send="true"
class="moz-txt-link-freetext" href="INFO::2015-09-22"><a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a></a>
19:12:59,904::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for <a
moz-do-not-send="true" href="http://127.0.0.1:52513" target="_blank">127.0.0.1:52513</a>
stopped</p>
<p>Thread-813::DEBUG::2015-09-22
19:13:05,898::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p>Thread-814::DEBUG::2015-09-22
19:13:07,934::task::595::Storage.TaskManager.Task::(_updateState)
Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::moving
from state init ->
state preparing</p>
<p>Thread-814::<a
moz-do-not-send="true"
class="moz-txt-link-freetext" href="INFO::2015-09-22"><a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a></a>
19:13:07,935::logUtils::48::dispatcher::(wrapper)
Run and protect:
repoStats(options=None)</p>
<p>Thread-814::<a
moz-do-not-send="true"
class="moz-txt-link-freetext" href="INFO::2015-09-22"><a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a></a>
19:13:07,935::logUtils::51::dispatcher::(wrapper)
Run and protect:
repoStats, Return
response: {}</p>
<p>Thread-814::DEBUG::2015-09-22
19:13:07,935::task::1191::Storage.TaskManager.Task::(prepare)
Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::finished:
{}</p>
<p>Thread-814::DEBUG::2015-09-22
19:13:07,935::task::595::Storage.TaskManager.Task::(_updateState)
Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::moving
from state preparing
-> state finished</p>
<p>Thread-814::DEBUG::2015-09-22
19:13:07,935::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll
requests {} resources
{}</p>
<p>Thread-814::DEBUG::2015-09-22
19:13:07,935::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll
requests {}</p>
<p>Thread-814::DEBUG::2015-09-22
19:13:07,935::task::993::Storage.TaskManager.Task::(_decref)
Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::ref
0 aborting False</p>
<p>Thread-814::DEBUG::2015-09-22
19:13:07,939::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p>Reactor thread::<a
moz-do-not-send="true"
class="moz-txt-link-freetext" href="INFO::2015-09-22"><a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a></a>
19:13:14,921::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection
from <a
moz-do-not-send="true"
href="http://127.0.0.1:52515" target="_blank">127.0.0.1:52515</a></p>
<p>Reactor
thread::DEBUG::2015-09-22
19:13:14,927::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11</p>
<p>Reactor thread::<a
moz-do-not-send="true"
class="moz-txt-link-freetext" href="INFO::2015-09-22"><a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a></a>
19:13:14,928::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml
from <a
moz-do-not-send="true"
href="http://127.0.0.1:52515" target="_blank">127.0.0.1:52515</a></p>
<p>Reactor
thread::DEBUG::2015-09-22
19:13:14,928::bindingxmlrpc::1297::XmlDetector::(handle_socket)
xml over http detected
from ('127.0.0.1',
52515)</p>
<p>BindingXMLRPC::<a
moz-do-not-send="true"
class="moz-txt-link-freetext" href="INFO::2015-09-22"><a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a></a>
19:13:14,928::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request
handler for <a
moz-do-not-send="true"
href="http://127.0.0.1:52515" target="_blank">127.0.0.1:52515</a></p>
<p>Thread-815::<a
moz-do-not-send="true"
class="moz-txt-link-freetext" href="INFO::2015-09-22"><a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a></a>
19:13:14,928::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for <a
moz-do-not-send="true" href="http://127.0.0.1:52515" target="_blank">127.0.0.1:52515</a>
started</p>
<p>Thread-815::<a
moz-do-not-send="true"
class="moz-txt-link-freetext" href="INFO::2015-09-22"><a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a></a>
19:13:14,930::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for <a
moz-do-not-send="true" href="http://127.0.0.1:52515" target="_blank">127.0.0.1:52515</a>
stopped</p>
<p><span></span></p>
<p>Thread-816::DEBUG::2015-09-22
19:13:20,906::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>

gluster logs
+------------------------------------------------------------------------------+
  1: volume vmstore-client-0
  2:     type protocol/client
  3:     option ping-timeout 42
  4:     option remote-host sjcstorage01
  5:     option remote-subvolume /export/vmstore/brick01
  6:     option transport-type socket
  7:     option send-gids true
  8: end-volume
  9:
 10: volume vmstore-client-1
 11:     type protocol/client
 12:     option ping-timeout 42
 13:     option remote-host sjcstorage02
 14:     option remote-subvolume /export/vmstore/brick01
 15:     option transport-type socket
 16:     option send-gids true
 17: end-volume
 18:
 19: volume vmstore-client-2
 20:     type protocol/client
 21:     option ping-timeout 42
 22:     option remote-host sjcvhost02
 23:     option remote-subvolume /export/vmstore/brick01
 24:     option transport-type socket
 25:     option send-gids true
 26: end-volume
 27:
 28: volume vmstore-replicate-0
 29:     type cluster/replicate
 30:     option arbiter-count 1
 31:     subvolumes vmstore-client-0 vmstore-client-1 vmstore-client-2
 32: end-volume
 33:
 34: volume vmstore-dht
 35:     type cluster/distribute
 36:     subvolumes vmstore-replicate-0
 37: end-volume
 38:
 39: volume vmstore-write-behind
 40:     type performance/write-behind
 41:     subvolumes vmstore-dht
 42: end-volume
 43:
 44: volume vmstore-read-ahead
 45:     type performance/read-ahead
 46:     subvolumes vmstore-write-behind
 47: end-volume
 48:
 49: volume vmstore-readdir-ahead
 50:     type performance/readdir-ahead
 51:     subvolumes vmstore-read-ahead
 52: end-volume
 53:
 54: volume vmstore-io-cache
 55:     type performance/io-cache
 56:     subvolumes vmstore-readdir-ahead
 57: end-volume
 58:
 59: volume vmstore-quick-read
 60:     type performance/quick-read
 61:     subvolumes vmstore-io-cache
 62: end-volume
 63:
 64: volume vmstore-open-behind
 65:     type performance/open-behind
 66:     subvolumes vmstore-quick-read
 67: end-volume
 68:
 69: volume vmstore-md-cache
 70:     type performance/md-cache
 71:     subvolumes vmstore-open-behind
 72: end-volume
 73:
 74: volume vmstore
 75:     type debug/io-stats
 76:     option latency-measurement off
 77:     option count-fop-hits off
 78:     subvolumes vmstore-md-cache
 79: end-volume
 80:
 81: volume meta-autoload
 82:     type meta
 83:     subvolumes vmstore
 84: end-volume
 85:
+------------------------------------------------------------------------------+
[2015-09-22 05:29:07.586205] I [rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-0: changing port to 49153 (from 0)
[2015-09-22 05:29:07.586325] I [rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-1: changing port to 49153 (from 0)
[2015-09-22 05:29:07.586480] I [rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-2: changing port to 49153 (from 0)
[2015-09-22 05:29:07.595052] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmstore-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-09-22 05:29:07.595397] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmstore-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-09-22 05:29:07.595576] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmstore-client-2: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-09-22 05:29:07.595721] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-0: Connected to vmstore-client-0, attached to remote volume '/export/vmstore/brick01'.
[2015-09-22 05:29:07.595738] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:29:07.596044] I [MSGID: 108005] [afr-common.c:3998:afr_notify] 0-vmstore-replicate-0: Subvolume 'vmstore-client-0' came back up; going online.
[2015-09-22 05:29:07.596170] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-1: Connected to vmstore-client-1, attached to remote volume '/export/vmstore/brick01'.
[2015-09-22 05:29:07.596189] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:29:07.596495] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-2: Connected to vmstore-client-2, attached to remote volume '/export/vmstore/brick01'.
[2015-09-22 05:29:07.596506] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-2: Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:29:07.608758] I [fuse-bridge.c:5053:fuse_graph_setup] 0-fuse: switched to graph 0
[2015-09-22 05:29:07.608910] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-0: Server lk version = 1
[2015-09-22 05:29:07.608936] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-1: Server lk version = 1
[2015-09-22 05:29:07.608950] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-2: Server lk version = 1
[2015-09-22 05:29:07.609695] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2015-09-22 05:29:07.609868] I [fuse-bridge.c:3979:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel 7.22
[2015-09-22 05:29:07.616577] I [MSGID: 109063] [dht-layout.c:702:dht_layout_normalize] 0-vmstore-dht: Found anomalies in / (gfid = 00000000-0000-0000-0000-000000000001). Holes=1 overlaps=0
[2015-09-22 05:29:07.620230] I [MSGID: 109036] [dht-common.c:7754:dht_log_new_layout_for_dir_selfheal] 0-vmstore-dht: Setting layout of / with [Subvol_name: vmstore-replicate-0, Err: -1 , Start: 0 , Stop: 4294967295 , Hash: 1 ],
[2015-09-22 05:29:08.122415] W [fuse-bridge.c:1230:fuse_err_cbk] 0-glusterfs-fuse: 26: REMOVEXATTR() /__DIRECT_IO_TEST__ => -1 (No data available)
[2015-09-22 05:29:08.137359] I [MSGID: 109036] [dht-common.c:7754:dht_log_new_layout_for_dir_selfheal] 0-vmstore-dht: Setting layout of /061b73d5-ae59-462e-b674-ea9c60d436c2 with [Subvol_name: vmstore-replicate-0, Err: -1 , Start: 0 , Stop: 4294967295 , Hash: 1 ],
[2015-09-22 05:29:08.145835] I [MSGID: 109036] [dht-common.c:7754:dht_log_new_layout_for_dir_selfheal] 0-vmstore-dht: Setting layout of /061b73d5-ae59-462e-b674-ea9c60d436c2/dom_md with [Subvol_name: vmstore-replicate-0, Err: -1 , Start: 0 , Stop: 4294967295 , Hash: 1 ],
[2015-09-22 05:30:57.897819] I [MSGID: 100030] [glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.4 (args: /usr/sbin/glusterfs --volfile-server=sjcvhost02 --volfile-server=sjcstorage01 --volfile-server=sjcstorage02 --volfile-id=/vmstore /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
[2015-09-22 05:30:57.909889] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2015-09-22 05:30:57.923087] I [MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-0: parent translators are ready, attempting connect on transport
[2015-09-22 05:30:57.925701] I [MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-1: parent translators are ready, attempting connect on transport
[2015-09-22 05:30:57.927984] I [MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-2: parent translators are ready, attempting connect on transport
Final graph:
+------------------------------------------------------------------------------+
  1: volume vmstore-client-0
  2:     type protocol/client
  3:     option ping-timeout 42
  4:     option remote-host sjcstorage01
  5:     option remote-subvolume /export/vmstore/brick01
  6:     option transport-type socket
  7:     option send-gids true
  8: end-volume
  9:
 10: volume vmstore-client-1
 11:     type protocol/client
 12:     option ping-timeout 42
 13:     option remote-host sjcstorage02
 14:     option remote-subvolume /export/vmstore/brick01
 15:     option transport-type socket
 16:     option send-gids true
 17: end-volume
 18:
 19: volume vmstore-client-2
 20:     type protocol/client
 21:     option ping-timeout 42
 22:     option remote-host sjcvhost02
 23:     option remote-subvolume /export/vmstore/brick01
 24:     option transport-type socket
 25:     option send-gids true
 26: end-volume
 27:
 28: volume vmstore-replicate-0
 29:     type cluster/replicate
 30:     option arbiter-count 1
 31:     subvolumes vmstore-client-0 vmstore-client-1 vmstore-client-2
 32: end-volume
 33:
 34: volume vmstore-dht
 35:     type cluster/distribute
 36:     subvolumes vmstore-replicate-0
 37: end-volume
 38:
 39: volume vmstore-write-behind
 40:     type performance/write-behind
 41:     subvolumes vmstore-dht
 42: end-volume
 43:
 44: volume vmstore-read-ahead
 45:     type performance/read-ahead
 46:     subvolumes vmstore-write-behind
 47: end-volume
 48:
 49: volume vmstore-readdir-ahead
 50:     type performance/readdir-ahead
 51:     subvolumes vmstore-read-ahead
 52: end-volume
 53:
 54: volume vmstore-io-cache
 55:     type performance/io-cache
 56:     subvolumes vmstore-readdir-ahead
 57: end-volume
 58:
 59: volume vmstore-quick-read
 60:     type performance/quick-read
 61:     subvolumes vmstore-io-cache
 62: end-volume
 63:
 64: volume vmstore-open-behind
 65:     type performance/open-behind
 66:     subvolumes vmstore-quick-read
 67: end-volume
 68:
 69: volume vmstore-md-cache
 70:     type performance/md-cache
 71:     subvolumes vmstore-open-behind
 72: end-volume
 73:
 74: volume vmstore
 75:     type debug/io-stats
 76:     option latency-measurement off
 77:     option count-fop-hits off
 78:     subvolumes vmstore-md-cache
 79: end-volume
 80:
 81: volume meta-autoload
 82:     type meta
 83:     subvolumes vmstore
 84: end-volume
 85:
+------------------------------------------------------------------------------+
[2015-09-22 05:30:57.934021] I [rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-0: changing port to 49153 (from 0)
[2015-09-22 05:30:57.934145] I [rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-1: changing port to 49153 (from 0)
[2015-09-22 05:30:57.934491] I [rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-2: changing port to 49153 (from 0)
[2015-09-22 05:30:57.942198] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmstore-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-09-22 05:30:57.942545] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmstore-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-09-22 05:30:57.942659] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmstore-client-2: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-09-22 05:30:57.942797] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-0: Connected to vmstore-client-0, attached to remote volume '/export/vmstore/brick01'.
[2015-09-22 05:30:57.942808] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:30:57.943036] I [MSGID: 108005] [afr-common.c:3998:afr_notify] 0-vmstore-replicate-0: Subvolume 'vmstore-client-0' came back up; going online.
[2015-09-22 05:30:57.943078] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-1: Connected to vmstore-client-1, attached to remote volume '/export/vmstore/brick01'.
[2015-09-22 05:30:57.943086] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:30:57.943292] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-2: Connected to vmstore-client-2, attached to remote volume '/export/vmstore/brick01'.
[2015-09-22 05:30:57.943302] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-2: Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:30:57.953887] I [fuse-bridge.c:5053:fuse_graph_setup] 0-fuse: switched to graph 0
[2015-09-22 05:30:57.954071] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-0: Server lk version = 1
[2015-09-22 05:30:57.954105] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-1: Server lk version = 1
[2015-09-22 05:30:57.954124] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-2: Server lk version = 1
[2015-09-22 05:30:57.955282] I [fuse-bridge.c:3979:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel 7.22
[2015-09-22 05:30:57.955738] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2015-09-22 05:30:57.970232] I [fuse-bridge.c:4900:fuse_thread_proc] 0-fuse: unmounting /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
[2015-09-22 05:30:57.970834] W [glusterfsd.c:1219:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7df5) [0x7f187139fdf5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f1872a09785] -->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7f1872a09609] ) 0-: received signum (15), shutting down
[2015-09-22 05:30:57.970848] I [fuse-bridge.c:5595:fini] 0-fuse: Unmounting '/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.
[2015-09-22 05:30:58.420973] I [fuse-bridge.c:4900:fuse_thread_proc] 0-fuse: unmounting /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
[2015-09-22 05:30:58.421355] W [glusterfsd.c:1219:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7df5) [0x7f8267cd4df5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f826933e785] -->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7f826933e609] ) 0-: received signum (15), shutting down
[2015-09-22 05:30:58.421369] I [fuse-bridge.c:5595:fini] 0-fuse: Unmounting '/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.
[2015-09-22 05:31:09.534410] I [MSGID: 100030] [glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.4 (args: /usr/sbin/glusterfs --volfile-server=sjcvhost02 --volfile-server=sjcstorage01 --volfile-server=sjcstorage02 --volfile-id=/vmstore /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
[2015-09-22 05:31:09.545686] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2015-09-22 05:31:09.553019] I [MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-0: parent translators are ready, attempting connect on transport
[2015-09-22 05:31:09.555552] I [MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-1: parent translators are ready, attempting connect on transport
[2015-09-22 05:31:09.557989] I [MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-2: parent translators are ready, attempting connect on transport
Final graph:
+------------------------------------------------------------------------------+
  1: volume vmstore-client-0
  2:     type protocol/client
  3:     option ping-timeout 42
  4:     option remote-host sjcstorage01
  5:     option remote-subvolume /export/vmstore/brick01
  6:     option transport-type socket
  7:     option send-gids true
  8: end-volume
  9:
 10: volume vmstore-client-1
 11:     type protocol/client
 12:     option ping-timeout 42
 13:     option remote-host sjcstorage02
 14:     option remote-subvolume /export/vmstore/brick01
 15:     option transport-type socket
 16:     option send-gids true
 17: end-volume
 18:
 19: volume vmstore-client-2
 20:     type protocol/client
 21:     option ping-timeout 42
 22:     option remote-host sjcvhost02
 23:     option remote-subvolume /export/vmstore/brick01
 24:     option transport-type socket
 25:     option send-gids true
 26: end-volume
 27:
 28: volume vmstore-replicate-0
 29:     type cluster/replicate
 30:     option arbiter-count 1
 31:     subvolumes vmstore-client-0 vmstore-client-1 vmstore-client-2
 32: end-volume
 33:
 34: volume vmstore-dht
 35:     type cluster/distribute
 36:     subvolumes vmstore-replicate-0
 37: end-volume
 38:
 39: volume vmstore-write-behind
 40:     type performance/write-behind
 41:     subvolumes vmstore-dht
 42: end-volume
 43:
 44: volume vmstore-read-ahead
 45:     type performance/read-ahead
 46:     subvolumes vmstore-write-behind
 47: end-volume
 48:
 49: volume vmstore-readdir-ahead
 50:     type performance/readdir-ahead
 51:     subvolumes vmstore-read-ahead
 52: end-volume
 53:
 54: volume vmstore-io-cache
 55:     type performance/io-cache
 56:     subvolumes vmstore-readdir-ahead
 57: end-volume
 58:
 59: volume vmstore-quick-read
 60:     type performance/quick-read
 61:     subvolumes vmstore-io-cache
 62: end-volume
 63:
 64: volume vmstore-open-behind
 65:     type performance/open-behind
 66:     subvolumes vmstore-quick-read
 67: end-volume
 68:
 69: volume vmstore-md-cache
 70:     type performance/md-cache
 71:     subvolumes vmstore-open-behind
 72: end-volume
 73:
 74: volume vmstore
 75:     type debug/io-stats
 76:     option latency-measurement off
 77:     option count-fop-hits off
 78:     subvolumes vmstore-md-cache
 79: end-volume
 80:
 81: volume meta-autoload
 82:     type meta
 83:     subvolumes vmstore
 84: end-volume
 85:
+------------------------------------------------------------------------------+
[2015-09-22 05:31:09.563262] I [rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-0: changing port to 49153 (from 0)
[2015-09-22 05:31:09.563431] I [rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-1: changing port to 49153 (from 0)
[2015-09-22 05:31:09.563877] I [rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-2: changing port to 49153 (from 0)
[2015-09-22 05:31:09.572443] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmstore-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-09-22 05:31:09.572599] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmstore-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-09-22 05:31:09.572742] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmstore-client-2: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-09-22 05:31:09.573165] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-1: Connected to vmstore-client-1, attached to remote volume '/export/vmstore/brick01'.
[2015-09-22 05:31:09.573186] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:31:09.573395] I [MSGID: 108005] [afr-common.c:3998:afr_notify] 0-vmstore-replicate-0: Subvolume 'vmstore-client-1' came back up; going online.
[2015-09-22 05:31:09.573427] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-0: Connected to vmstore-client-0, attached to remote volume '/export/vmstore/brick01'.
[2015-09-22 05:31:09.573435] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:31:09.573754] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-2: Connected to vmstore-client-2, attached to remote volume '/export/vmstore/brick01'.
[2015-09-22 05:31:09.573783] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-2: Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:31:09.577192] I [fuse-bridge.c:5053:fuse_graph_setup] 0-fuse: switched to graph 0
[2015-09-22 05:31:09.577302] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-1: Server lk version = 1
[2015-09-22 05:31:09.577325] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-0: Server lk version = 1
[2015-09-22 05:31:09.577339] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-2: Server lk version = 1
[2015-09-22 05:31:09.578125] I [fuse-bridge.c:3979:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel 7.22
[2015-09-22 05:31:09.578636] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2015-09-22 05:31:10.073698] I [fuse-bridge.c:4900:fuse_thread_proc] 0-fuse: unmounting /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
[2015-09-22 05:31:10.073977] W [glusterfsd.c:1219:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7df5) [0x7f6b9ba88df5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f6b9d0f2785] -->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7f6b9d0f2609] ) 0-: received signum (15), shutting down
[2015-09-22 05:31:10.073993] I [fuse-bridge.c:5595:fini] 0-fuse: Unmounting '/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.
[2015-09-22 05:31:20.184700] I [MSGID: 100030] [glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.4 (args: /usr/sbin/glusterfs --volfile-server=sjcvhost02 --volfile-server=sjcstorage01 --volfile-server=sjcstorage02 --volfile-id=/vmstore /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
[2015-09-22 05:31:20.194928] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2015-09-22 05:31:20.200701] I [MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-0: parent translators are ready, attempting connect on transport
[2015-09-22 05:31:20.203110] I [MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-1: parent translators are ready, attempting connect on transport
[2015-09-22 05:31:20.205708] I [MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-2: parent translators are ready, attempting connect on transport
Final graph:

Hope this helps.

thanks again

Brett Stevens
<div class="gmail_extra"><br>
<div class="gmail_quote">On
Tue, Sep 22, 2015 at
10:14 PM, Sahina Bose <span
dir="ltr"><<a
moz-do-not-send="true"
class="moz-txt-link-abbreviated" href="mailto:sabose@redhat.com"><a class="moz-txt-link-abbreviated" href="mailto:sabose@redhat.com">sabose(a)redhat.com</a></a>></span>
wrote:<br>
<blockquote
class="gmail_quote"
style="margin:0 0 0
.8ex;border-left:1px
#ccc
solid;padding-left:1ex">
On 09/22/2015 02:17 PM, Brett Stevens wrote:
Hi. First time on the lists. I've searched for this with no luck, so sorry if it has been covered before.

I'm working with the latest 3.6 beta and the following infrastructure:

1 management host (to be used for a number of tasks, so I chose not to use self-hosted; we are a school and need to keep an eye on hardware costs)
2 compute nodes
2 gluster nodes

So far I have built one gluster volume using the gluster CLI, giving me 2 data nodes and one arbiter node (the management host).

Every time I create a volume it shows up straight away in the oVirt GUI, but no matter what I try I cannot create or import it as a data domain.

The current error in the oVirt GUI is "Error while executing action AddGlusterFsStorageDomain: Error creating a storage domain's metadata".
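For context, a two-data-node-plus-arbiter volume of the shape described here (and visible in the volume graphs above) would typically be created along these lines. This is only a sketch: the volume name and brick paths are taken from the logs in this thread, and the exact syntax depends on the GlusterFS version (the logs show 3.7.4, where "replica 3 arbiter 1" is available):

    gluster volume create vmstore replica 3 arbiter 1 \
        sjcstorage01:/export/vmstore/brick01 \
        sjcstorage02:/export/vmstore/brick01 \
        sjcvhost02:/export/vmstore/brick01
    gluster volume start vmstore

For use as an oVirt data domain the volume usually also needs the virt option group applied (gluster volume set vmstore group virt), but that is an assumption about the intended setup rather than something shown in this thread.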
Please provide vdsm and gluster logs.
logs, continuously rolling the following errors around:

Scheduler_Worker-53) [] START, GlusterVolumesListVDSCommand(HostName = sjcstorage02, GlusterVolumesListVDSParameters:{runAsync='true', hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id: 24198fbf

2015-09-22 03:57:29,903 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-53) [] Could not associate brick 'sjcstorage01:/export/vmstore/brick01' of volume '878a316d-2394-4aae-bdf8-e10eea38225e' with correct network as no gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>
> What is the hostname provided in oVirt engine for sjcstorage01? Does this host have multiple NICs?
>
> Could you provide the output of "gluster volume info"?
> Please note that these errors are not related to the error in creating the storage domain. However, they could prevent you from monitoring the state of the gluster volume from oVirt.
>
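For reference, the volume and peer details requested above can usually be gathered on one of the gluster nodes with commands along these lines; this is only a sketch, assuming the volume is named vmstore (as in the logs) and default log locations:

# volume layout, brick status and options
gluster volume info vmstore
gluster volume status vmstore

# peer membership as seen by glusterd (the UUIDs here should match what the engine reports)
gluster peer status

# logs typically requested: vdsm log on the hypervisor and the gluster client mount log
less /var/log/vdsm/vdsm.log
less /var/log/glusterfs/rhev-data-center-mnt-glusterSD-sjcvhost02:_vmstore.log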
>> 2015-09-22 03:57:29,905 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-53) [] Could not associate brick 'sjcstorage02:/export/vmstore/brick01' of volume '878a316d-2394-4aae-bdf8-e10eea38225e' with correct network as no gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>
>> 2015-09-22 03:57:29,905 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-53) [] Could not add brick 'sjcvhost02:/export/vmstore/brick01' to volume '878a316d-2394-4aae-bdf8-e10eea38225e' - server uuid '29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>
>> 2015-09-22 03:57:29,905 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-53) [] FINISH, GlusterVolumesListVDSCommand, return: {878a316d-2394-4aae-bdf8-e10eea38225e=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@41e93fb1}, log id: 24198fbf
>>
>> I'm new to oVirt and gluster, so any help would be great.
>>
>> thanks
>>
>> Brett Stevens
I am using oVirt 3.5 and have configured NAT using the extnet hook.
I'm running into this situation:
vm: 192.168.122.160
host: 192.168.0.120
gateway: 192.168.0.1
The VM can reach the outside network but cannot reach the host.
I've diffed an original libvirt VM and an oVirt-based VM but cannot see any obvious difference.
Please help me out of this...
Thank you very much.
This is the NAT configuration:
<network>
  <name>default</name>
  <uuid>ea0eb0cf-b507-451c-9f0d-919675ea7d8a</uuid>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:c1:18:e4'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
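A few checks that may help narrow this down; this is only a sketch, assuming the default libvirt NAT setup shown above (addresses taken from the mail):

# on the host: confirm the NAT network is active and virbr0 carries 192.168.122.1
virsh net-list --all
ip addr show virbr0

# confirm IP forwarding and libvirt's NAT/forward rules are in place
sysctl net.ipv4.ip_forward
iptables -t nat -L -n -v | grep 192.168.122
iptables -L FORWARD -n -v

# from the VM: check whether the bridge address responds even when the host's LAN address does not
ping -c 3 192.168.122.1
ping -c 3 192.168.0.120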
1
0
I'm getting hundreds of email messages from one of my hosts, several per
minute, with a subject of "*** SECURITY information for ov1.example ***
vdsm : problem with defaults entries ; TTY=unknown ; PWD=/ ;
Any ideas on how I can fix this?
Robert
--
Senior Software Engineer @ Parsons
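That subject is the mail sudo sends to root when it trips over a Defaults entry it cannot parse, so the sudoers configuration on that host is the usual place to look. A minimal sketch of what one might check (assuming the stock vdsm sudoers drop-in on EL6/EL7):

# syntax-check /etc/sudoers and everything it includes
visudo -c

# review the vdsm drop-in and any Defaults lines that may have been edited recently
cat /etc/sudoers.d/50_vdsm
grep -n '^Defaults' /etc/sudoers /etc/sudoers.d/*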
3.4.0-0.13.rc.el6
See the error below:
User admin failed to grant permission for Role QuotaConsumer on Quota MobilePolicy to User/Group Non interactive user.
[ANN] oVirt 3.6.0 First Release Candidate is now available for testing
by Sandro Bonazzola 28 Sep '15
The oVirt Project is pleased to announce the availability
of the First Release Candidate of oVirt 3.6 for testing, as of September
28th, 2015.
This release is available now for Fedora 22,
Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and
Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar).
This release supports Hypervisor Hosts running
Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar),
Fedora 21 and Fedora 22.
Highly experimental support for Debian 8.1 Jessie has been added too.
This release of oVirt 3.6.0 includes numerous bug fixes.
See the release notes [1] for an initial list of the new features and bugs
fixed.
Please refer to the release notes [1] for installation and upgrade instructions.
New oVirt Node and oVirt Live ISO images will be available soon as well [2].
Please note that mirrors [3] usually need about one day to synchronize.
Please refer to the release notes for known issues in this release.
[1] http://www.ovirt.org/OVirt_3.6_Release_Notes
[2] http://plain.resources.ovirt.org/pub/ovirt-3.6-pre/iso/
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors
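For a quick test install on an EL7 host, the usual pattern is to add the oVirt 3.6 release repository and then install the engine; this is only a sketch, and the exact repository package for the release candidate is documented in the release notes [1]:

# assumed release-package URL for the 3.6 series; verify against the release notes
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release36.rpm
yum install ovirt-engine
engine-setup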
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com