Re: [ovirt-devel] [CQ]: 86865, 8 (otopi) failed "ovirt-master" system tests, but isn't the failure root cause
by Yedidyah Bar David
On Mon, Jan 29, 2018 at 1:47 PM, oVirt Jenkins <jenkins(a)ovirt.org> wrote:
> A system test invoked by the "ovirt-master" change queue including change
> 86865,8 (otopi) failed. However, this change seems not to be the root cause for
> this failure. Change 86679,4 (otopi) that this change depends on or is based
> on, was detected as the cause of the testing failures.
>
> This change had been removed from the testing queue. Artifacts built from this
> change will not be released until either change 86679,4 (otopi) is fixed and
> this change is updated to refer to or rebased on the fixed version, or this
> change is modified to no longer depend on it.
>
> For further details about the change see:
> https://gerrit.ovirt.org/#/c/86865/8
>
> For further details about the change that seems to be the root cause behind the
> testing failures see:
> https://gerrit.ovirt.org/#/c/86679/4
That's indeed the reason for the failure.
This patch adds to otopi a check of the before=/after= parameters of
event methods.
It causes CI to fail due to bad ones in engine-setup; those should be fixed by:
https://gerrit.ovirt.org/86858
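For context, a minimal sketch of the kind of validation such a check could perform; the event-method structure and the function name below are illustrative assumptions, not otopi's actual internals:

import logging

def check_event_ordering(methods):
    # methods: iterable of dicts with 'name', 'before' and 'after' keys
    # (illustrative structure; otopi's real event metadata differs).
    known = {m['name'] for m in methods if m.get('name')}
    bad = []
    for m in methods:
        for key in ('before', 'after'):
            for ref in (m.get(key) or ()):
                if ref not in known:
                    bad.append((m.get('name'), key, ref))
    if bad:
        logging.error('invalid before=/after= references: %s', bad)
        raise RuntimeError('invalid before=/after= references: %s' % bad)

Bad references of this kind in engine-setup's event methods are what makes CI fail here.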
>
> For failed test results see:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/5149/
> _______________________________________________
> Infra mailing list
> Infra(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
--
Didi
what does vm.merge do in vdsm?
by Pen
Hi,
      What does the "drive.volumeID" parameter of vdsm's merge API in [1] mean? If I want to delete "snapshot1" in the snapshot chain [3],
does "drive.volumeID" mean "snapshot2" or "active_img"? I have read the document in [2], and I have a further question:
if we want to delete "snapshot1" in [4], and "snapshot1" is the srcVolume of both "snapshot2" and "snapshot3" at the
same time, what should I do? Merge twice?


[1]
@api.method
def merge(self, drive, baseVolUUID, topVolUUID, bandwidth=0, jobUUID=None):
    return self.vm.merge(
        drive, baseVolUUID, topVolUUID, bandwidth, jobUUID)


[2] https://www.ovirt.org/develop/release-management/features/storage/live-merge/

[3]
backing_img --- snapshot1 --- snapshot2 --- active_img

[4]
backing_img --- snapshot1 --- snapshot2 --- active_img
                     |
                     |
                     ----- snapshot3
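For readers unfamiliar with the call shape, here is a hedged illustration only (all IDs are placeholders, and it assumes the drive parameter is the usual driveSpec dictionary; which volume drive['volumeID'] should name is exactly the open question above):

drive = {
    'poolID': 'POOL-UUID',        # placeholder
    'domainID': 'DOMAIN-UUID',    # placeholder
    'imageID': 'IMAGE-UUID',      # placeholder
    'volumeID': 'VOLUME-UUID',    # the volume the question is about
}
# Merge topVolUUID into baseVolUUID for this drive; assuming bandwidth=0
# means "unlimited" and jobUUID just identifies the merge job.
api.merge(drive,
          baseVolUUID='BACKING-VOL-UUID',
          topVolUUID='SNAPSHOT1-VOL-UUID',
          bandwidth=0,
          jobUUID='JOB-UUID')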
[ OST Failure Report ] [ oVirt Master (vcredist-x86) ] [ 26-01-2018 ] [ 002_bootstrap.verify_add_all_hosts ]
by Dafna Ron
Hi,
We had a failure in test 002_bootstrap.verify_add_all_hosts on oVirt Master.
From what I can see, the host was locked when we tried to query its status,
and since we couldn't fence the host we exited on a timeout.
I do not think the reported patch is the reason for the failure.
Link and headline of reported patch:
https://gerrit.ovirt.org/#/c/82234/1 - Initial import from ovirt-wgt-toolchain

Link to Job:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/5123/

Link to all logs:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/5123/artifact/

(Relevant) error snippet from the log:

<error>
2018-01-26 06:37:31,022-05 INFO  [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-4) [75b3e642] Failed to Acquire Lock to object 'EngineLock:{exclusiveLocks='[52ee1a61-31c3-4af8-ae28-65f0953f1804=VDS, HOST_NETWORK52ee1a61-31c3-4af8-ae28-65f0953f1804=HOST_NETWORK]', sharedLocks=''}'
2018-01-26 06:37:31,022-05 WARN  [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-4) [75b3e642] Validation of action 'RefreshHostCapabilities' failed for user SYSTEM. Reasons: VAR__ACTION__REFRESH,VAR__TYPE__HOST_CAPABILITIES,ACTION_TYPE_FAILED_OBJECT_LOCKED
2018-01-26 06:37:31,373-05 DEBUG [org.ovirt.vdsm.jsonrpc.client.reactors.stomp.impl.Message] (SSL Stomp Reactor) [1d883766] MESSAGE content-length:80 destination:jms.topic.vdsm_responses content-type:application/json subscription:ce434ca4-0c91-44b6-843a-7f8cff40d8e4 {"jsonrpc": "2.0", "id": "8f1c8652-044c-4051-85d7-3d290ee2014f", "result": true}^@
2018-01-26 06:37:31,374-05 DEBUG [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker) [] Message received: {"jsonrpc": "2.0", "id": "8f1c8652-044c-4051-85d7-3d290ee2014f", "result": true}
2018-01-26 06:37:31,374-05 ERROR [org.ovirt.vdsm.jsonrpc.client.JsonRpcClient] (ResponseWorker) [] Not able to update response for "8f1c8652-044c-4051-85d7-3d290ee2014f"
2018-01-26 06:37:31,376-05 DEBUG [org.ovirt.vdsm.jsonrpc.client.reactors.stomp.impl.Message] (SSL Stomp Reactor) [1d883766] MESSAGE content-length:80
2018-01-26 06:37:35,147-05 ERROR [org.ovirt.engine.core.bll.pm.FenceProxyLocator] (EE-ManagedThreadFactory-engineScheduled-Thread-8) [78ab5d20] Can not run fence action on host 'lago-basic-suite-master-host-0', no suitable proxy host was found.
</error>
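As an aside, a hedged sketch of how a test could wait for the hosts instead of querying while the engine still holds the lock (ovirt-engine-sdk4; the URL, credentials and timeout are placeholders, and this is not the actual OST code):

import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
                            username='admin@internal', password='***',
                            insecure=True)
hosts_service = connection.system_service().hosts_service()

deadline = time.time() + 600
statuses = {}
while time.time() < deadline:
    # Poll all hosts until every one of them reports status UP.
    statuses = {h.name: h.status for h in hosts_service.list()}
    if statuses and all(s == types.HostStatus.UP for s in statuses.values()):
        break
    time.sleep(10)
else:
    raise RuntimeError('hosts did not come up in time: %s' % statuses)
connection.close()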
How does vm.merge behave when we delete a snapshot that is pointed to by several other snapshots' srcVol (i.e. serves as the backing of several other snapshots)?
by pengyixiang
Hello, everyone!
      I have read the document in [2] about vm.merge in [1], and I have a question: if we want to delete "snapshot1" in [3], and "snapshot1" is the srcVolume of both "snapshot2" and "snapshot3" at the
same time, what should I do? Will libvirt merge snapshot1 into both snapshot2 and snapshot3, or is it done some other way for efficiency?

Thanks for your help if I can get a reply.


[1]
@api.method
def merge(self, drive, baseVolUUID, topVolUUID, bandwidth=0, jobUUID=None):
    return self.vm.merge(
        drive, baseVolUUID, topVolUUID, bandwidth, jobUUID)


[2] https://www.ovirt.org/develop/release-management/features/storage/live-merge/

[3]
backing_img --- snapshot1 --- snapshot2 --- active_img
                     |
                     |
                     ----- snapshot3
Problem when loading images via web interface
by Dmitry Semenov
When uploading a disk image (via the web interface) in cluster01 to storage01 or storage02, everything goes well.
When uploading a disk image (via the web interface) in cluster02 to storage03 or storage04, the problem occurs: the image is not uploaded and the process stops at the stage "Paused by System" (at the same time, uploading directly through the API works without problems).
screenshot: https://yadi.sk/i/9WtkDlT23Riqxp
Logs are applied (engine.log): https://pastebin.com/54k5j7hC
image size: ~1.3 GB
my scheme:
data_center_01
cluster01
host01 \
host02 - storage01, storage02
host03 /
cluster02
host04 \
host05 - storage03, storage04
host06 /
HostedEngine in cluster01
oVirt: Version 4.2.0.2-1.el7.centos
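Since the API path works here, below is a hedged sketch of the image-transfer upload flow it uses (ovirt-engine-sdk4; the engine URL, credentials and disk id are placeholders, and the actual byte upload to the transfer URL is omitted):

import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
                            username='admin@internal', password='***',
                            insecure=True)
transfers_service = connection.system_service().image_transfers_service()
# Start an upload transfer for an existing disk.
transfer = transfers_service.add(
    types.ImageTransfer(
        disk=types.Disk(id='DISK-UUID'),
        direction=types.ImageTransferDirection.UPLOAD,
    )
)
transfer_service = transfers_service.image_transfer_service(transfer.id)
# Wait until the engine provides a transfer URL.
while transfer.phase == types.ImageTransferPhase.INITIALIZING:
    time.sleep(1)
    transfer = transfer_service.get()
# ... PUT the image bytes to transfer.proxy_url (or transfer.transfer_url) ...
transfer_service.finalize()
connection.close()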
--
Best regards,
[25-1-18] [ OST Failure Report] [oVirt Master (vdsm)] [post-002_bootstrap]
by Daniel Belenky
Hi,
We failed to set up a host in the OST upgrade-from-4.1-to-master suite.
Please note that the upgrade suite installs 4.1 engine, then upgrades it to
master and then tries to set up a host.
*Links:*
1. Link to failed job
<http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/5093/artifa...>
2. Suspected patch: Gerrit 86474/33
<https://gerrit.ovirt.org/#/c/86474/33>
*Error snippet from engine.log (engine):*
2018-01-24 15:13:20,257-05 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(VdsDeploy) [34609a2f] EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511),
An error has occurred during installation of Host
lago-upgrade-from-release-suite-master-host0: Failed to execute stage
'Closing up': Failed to start service 'vdsmd'.
2018-01-24 15:13:20,301-05 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(VdsDeploy) [34609a2f] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509),
Installing Host lago-upgrade-from-release-suite-master-host0. Stage:
Clean up.
2018-01-24 15:13:20,304-05 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(VdsDeploy) [34609a2f] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509),
Installing Host lago-upgrade-from-release-suite-master-host0. Stage:
Pre-termination.
2018-01-24 15:13:20,332-05 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(VdsDeploy) [34609a2f] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509),
Installing Host lago-upgrade-from-release-suite-master-host0.
Retrieving installation logs to:
'/var/log/ovirt-engine/host-deploy/ovirt-host-deploy-20180124151320-lago-upgrade-from-release-suite-master-host0-34609a2f.log'.
2018-01-24 15:13:29,227-05 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(VdsDeploy) [34609a2f] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509),
Installing Host lago-upgrade-from-release-suite-master-host0. Stage:
Termination.
2018-01-24 15:13:29,321-05 ERROR
[org.ovirt.engine.core.uutils.ssh.SSHDialog]
(EE-ManagedThreadFactory-engine-Thread-1) [34609a2f] SSH error running
command root@lago-upgrade-from-release-suite-master-host0:'umask 0077;
MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d -t ovirt-XXXXXXXXXX)";
trap "chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1; rm -fr
\"${MYTMP}\" > /dev/null 2>&1" 0; tar --warning=no-timestamp -C
"${MYTMP}" -x && "${MYTMP}"/ovirt-host-deploy
DIALOG/dialect=str:machine DIALOG/customization=bool:True':
IOException: Command returned failure code 1 during SSH session
'root@lago-upgrade-from-release-suite-master-host0'
2018-01-24 15:13:29,322-05 ERROR
[org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase]
(EE-ManagedThreadFactory-engine-Thread-1) [34609a2f] Error during host
lago-upgrade-from-release-suite-master-host0 install
2018-01-24 15:13:29,324-05 ERROR
[org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand]
(EE-ManagedThreadFactory-engine-Thread-1) [34609a2f] Host installation
failed for host '4d681c3b-e8db-4a71-b5e3-0db096e3ae9c',
'lago-upgrade-from-release-suite-master-host0': Command returned
failure code 1 during SSH session
'root@lago-upgrade-from-release-suite-master-host0'
2018-01-24 15:13:29,330-05 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-1) [34609a2f] START,
SetVdsStatusVDSCommand(HostName =
lago-upgrade-from-release-suite-master-host0,
SetVdsStatusVDSCommandParameters:{hostId='4d681c3b-e8db-4a71-b5e3-0db096e3ae9c',
status='InstallFailed', nonOperationalReason='NONE',
stopSpmFailureLogged='false', maintenanceReason='null'}), log id:
5e6c4a3e
2018-01-24 15:13:29,339-05 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-1) [34609a2f] FINISH,
SetVdsStatusVDSCommand, log id: 5e6c4a3e
2018-01-24 15:13:29,346-05 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-1) [34609a2f] EVENT_ID:
VDS_INSTALL_FAILED(505), Host
lago-upgrade-from-release-suite-master-host0 installation failed.
Command returned failure code 1 during SSH session
'root@lago-upgrade-from-release-suite-master-host0'.
*Error snippet from /var/log/messages (host0):*
15:13:19 host0 NetworkManager[580]: <info> (bondscan-Ncw7DP): new
Bond device (carrier: OFF, driver: 'bonding', ifindex: 4)
15:13:19 host0 vdsm-tool: Traceback (most recent call last):
15:13:19 host0 vdsm-tool: File "/usr/bin/vdsm-tool", line 219, in main
15:13:19 host0 vdsm-tool: return tool_command[cmd]["command"](*args)
15:13:19 host0 vdsm-tool: File
"/usr/lib/python2.7/site-packages/vdsm/tool/network.py", line 97, in
dump_bonding_options
15:13:19 host0 vdsm-tool: sysfs_options_mapper.dump_bonding_options()
15:13:19 host0 vdsm-tool: File
"/usr/lib/python2.7/site-packages/vdsm/network/link/bond/sysfs_options_mapper.py",
line 48, in dump_bonding_options
15:13:19 host0 vdsm-tool: jdump(_get_default_bonding_options(), f)
15:13:19 host0 vdsm-tool: File
"/usr/lib/python2.7/site-packages/vdsm/network/link/bond/sysfs_options_mapper.py",
line 60, in _get_default_bonding_options
15:13:19 host0 vdsm-tool: with _bond_device(bond_name):
15:13:19 host0 vdsm-tool: File "/usr/lib64/python2.7/contextlib.py",
line 17, in __enter__
15:13:19 host0 vdsm-tool: return self.gen.next()
15:13:19 host0 vdsm-tool: File
"/usr/lib/python2.7/site-packages/vdsm/network/link/bond/sysfs_options_mapper.py",
line 102, in _bond_device
15:13:19 host0 vdsm-tool: _unmanage_nm_device(bond_name)
15:13:19 host0 vdsm-tool: File
"/usr/lib/python2.7/site-packages/vdsm/network/link/bond/sysfs_options_mapper.py",
line 116, in _unmanage_nm_device
15:13:19 host0 vdsm-tool: dev.managed = False
15:13:19 host0 vdsm-tool: File
"/usr/lib/python2.7/site-packages/vdsm/network/nm/networkmanager.py",
line 90, in managed
15:13:19 host0 vdsm-tool: self._device.managed = value
15:13:19 host0 vdsm-tool: File
"/usr/lib/python2.7/site-packages/vdsm/network/nm/nmdbus/device.py",
line 81, in managed
15:13:19 host0 vdsm-tool: return self._set_property('Managed', value)
15:13:19 host0 vdsm-tool: File
"/usr/lib/python2.7/site-packages/vdsm/network/nm/nmdbus/device.py",
line 88, in _set_property
15:13:19 host0 vdsm-tool: self.IF_NAME, property_name, property_value)
15:13:19 host0 vdsm-tool: File
"/usr/lib64/python2.7/site-packages/dbus/proxies.py", line 70, in
__call__
15:13:19 host0 vdsm-tool: return self._proxy_method(*args, **keywords)
15:13:19 host0 vdsm-tool: File
"/usr/lib64/python2.7/site-packages/dbus/proxies.py", line 145, in
__call__
15:13:19 host0 vdsm-tool: **keywords)
15:13:19 host0 vdsm-tool: File
"/usr/lib64/python2.7/site-packages/dbus/connection.py", line 651, in
call_blocking
15:13:19 host0 vdsm-tool: message, timeout)
15:13:19 host0 vdsm-tool: DBusException:
org.freedesktop.DBus.Error.AccessDenied: Property "Managed" of
interface "org.freedesktop.NetworkManager.Device" is not settable
Thanks,
--
DANIEL BELENKY
RHV DEVOPS
[ OST Failure Report ] [ oVirt Master (ovirt-engine-sdk) ] [24-01-2018 ] [ 007_sd_reattach.deactivate_storage_domain ]
by Dafna Ron
Hi,
We failed 007_sd_reattach.deactivate_storage_domain once again because the
test tried to deactivate a storage domain that still had running tasks.
I did not find the bug I thought I opened on this issue in the past so I
opened a new one.
https://bugzilla.redhat.com/show_bug.cgi?id=1538228
Link and headline of suspected patches: the patch is not related to the issue.

Link to Job:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/5076/

Link to all logs:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/5076/artifact/

(Relevant) error snippet from the log:

<error>
2018-01-24 09:12:01,070-05 WARN  [org.ovirt.engine.core.bll.storage.domain.DeactivateStorageDomainWithOvfUpdateCommand] (default task-30) [87107c27-275d-4fee-a456-013615e817cc] Validation of action 'DeactivateStorageDomainWithOvfUpdate' failed for user admin@internal-authz. Reasons: VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__DEACTIVATE,ERROR_CANNOT_DEACTIVATE_DOMAIN_WITH_TASKS
2018-01-24 09:12:01,071-05 INFO  [org.ovirt.engine.core.bll.storage.domain.DeactivateStorageDomainWithOvfUpdateCommand] (default task-30) [87107c27-275d-4fee-a456-013615e817cc] Lock freed to object 'EngineLock:{exclusiveLocks='[2b834f56-9469-4c01-98bf-531d41812fe4=STORAGE]', sharedLocks='[3e20e50b-4536-4f03-9bd3-7f02d321e734=POOL]'}'
2018-01-24 09:12:01,075-05 DEBUG [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor] (default task-30) [87107c27-275d-4fee-a456-013615e817cc] method: runAction, params: [DeactivateStorageDomainWithOvfUpdate, DeactivateStorageDomainWithOvfUpdateParameters:{commandId='df244508-5d1c-4485-82e6-11bc426072bc', user='null', commandType='Unknown'}], timeElapsed: 70ms
2018-01-24 09:12:01,079-05 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-30) [] Operation Failed: [Cannot deactivate Storage while there are running tasks on this Storage.-Please wait until tasks will finish and try again.]
</error>
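A hedged sketch of how the test could retry the deactivation until the engine's background (OVF update) tasks finish, instead of failing on the first attempt (ovirt-engine-sdk4; the ids, credentials and retry policy are assumptions, not the actual OST code):

import time
import ovirtsdk4 as sdk

connection = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
                            username='admin@internal', password='***',
                            insecure=True)
# Attached storage domain service for a placeholder data center / domain.
attached_sd_service = (
    connection.system_service()
    .data_centers_service().data_center_service('DC-UUID')
    .storage_domains_service().storage_domain_service('SD-UUID')
)
for _ in range(30):
    try:
        attached_sd_service.deactivate()
        break
    except sdk.Error as e:
        # Retry only the "running tasks" validation failure seen above.
        if 'running tasks' not in str(e):
            raise
        time.sleep(10)
else:
    raise RuntimeError('storage domain still has running tasks after retries')
connection.close()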
[ OST Failure Report ] [ oVirt Master (vdsm)] [ 24-01-2018 ] [ 002_bootstrap.add_hosts ]
by Dafna Ron
Hi,
We have a failure in test 002_bootstrap.add_hosts on oVirt Master in
project vdsm.
The issue seems to be that we fail to configure vdsm correctly.
Link and headline of suspected patches:
https://gerrit.ovirt.org/#/c/86474/ - net: unmanage bonds used for options gathering

Link to Job:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/5067

Link to all logs:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/5067/artifact/

(Relevant) error snippet from the log:

<error>
Running configure...
Reconfiguration of abrt is done.
Reconfiguration of passwd is done.
WARNING: LVM local configuration: /etc/lvm/lvmlocal.conf is not based on vdsm configuration
Backing up /etc/lvm/lvmlocal.conf to /etc/lvm/lvmlocal.conf.201801231944
Installing /usr/share/vdsm/lvmlocal.conf at /etc/lvm/lvmlocal.conf
Units need configuration: {'lvm2-lvmetad.service': {'LoadState': 'loaded', 'ActiveState': 'active'}, 'lvm2-lvmetad.socket': {'LoadState': 'loaded', 'ActiveState': 'active'}}
Reconfiguration of lvm is done.
Reconfiguration of sebool is done.
Reconfiguration of libvirt is done.
Reconfiguration of sanlock is done.
2018-01-23 19:44:08,411-0500 DEBUG otopi.plugins.ovirt_host_deploy.vdsm.packages plugin.execute:926 execute-output: ('/usr/bin/vdsm-tool', 'configure', '--force') stderr:
Traceback (most recent call last):
  File "/usr/bin/vdsm-tool", line 219, in main
    return tool_command[cmd]["command"](*args)
  File "/usr/lib/python2.7/site-packages/vdsm/tool/__init__.py", line 38, in wrapper
    func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/tool/configurator.py", line 141, in configure
    _configure(c)
  File "/usr/lib/python2.7/site-packages/vdsm/tool/configurator.py", line 88, in _configure
    getattr(module, 'configure', lambda: None)()
  File "/usr/lib/python2.7/site-packages/vdsm/tool/configurators/bond_defaults.py", line 40, in configure
    sysfs_options_mapper.dump_bonding_options()
  File "/usr/lib/python2.7/site-packages/vdsm/network/link/bond/sysfs_options_mapper.py", line 48, in dump_bonding_options
    jdump(_get_default_bonding_options(), f)
  File "/usr/lib/python2.7/site-packages/vdsm/network/link/bond/sysfs_options_mapper.py", line 60, in _get_default_bonding_options
    with _bond_device(bond_name):
  File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/usr/lib/python2.7/site-packages/vdsm/network/link/bond/sysfs_options_mapper.py", line 102, in _bond_device
    _unmanage_nm_device(bond_name)
  File "/usr/lib/python2.7/site-packages/vdsm/network/link/bond/sysfs_options_mapper.py", line 116, in _unmanage_nm_device
    dev.managed = False
  File "/usr/lib/python2.7/site-packages/vdsm/network/nm/networkmanager.py", line 90, in managed
    self._device.managed = value
  File "/usr/lib/python2.7/site-packages/vdsm/network/nm/nmdbus/device.py", line 81, in managed
    return self._set_property('Managed', value)
  File "/usr/lib/python2.7/site-packages/vdsm/network/nm/nmdbus/device.py", line 88, in _set_property
    self.IF_NAME, property_name, property_value)
  File "/usr/lib64/python2.7/site-packages/dbus/proxies.py", line 70, in __call__
    return self._proxy_method(*args, **keywords)
  File "/usr/lib64/python2.7/site-packages/dbus/proxies.py", line 145, in __call__
    **keywords)
  File "/usr/lib64/python2.7/site-packages/dbus/connection.py", line 651, in call_blocking
    message, timeout)
DBusException: org.freedesktop.DBus.Error.AccessDenied: Property "Managed" of interface "org.freedesktop.NetworkManager.Device" is not settable
2018-01-23 19:44:08,412-0500 WARNING otopi.plugins.ovirt_host_deploy.vdsm.packages packages._reconfigure:151 Cannot configure vdsm
2018-01-23 19:44:08,413-0500 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND **%EventEnd STAGE closeup METHOD otopi.plugins.ovirt_host_deploy.vdsm.packages.Plugin._reconfigure (None)
2018-01-23 19:44:08,414-0500 DEBUG otopi.context context._executeMethod:128 Stage closeup METHOD otopi.plugins.otopi.network.firewalld.Plugin._closeup
2018-01-23 19:44:08,414-0500 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND **%EventStart STAGE closeup METHOD otopi.plugins.otopi.network.firewalld.Plugin._closeup (None):
</error>
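Conceptually, the step that blows up is only trying to read the kernel's default bonding options and dump them as JSON; a hedged sketch of that idea follows (paths, names and the output file are illustrative; vdsm's real sysfs_options_mapper.py also creates the temporary bond and has to keep NetworkManager away from it, which is exactly where the DBusException above occurs):

import json
import os

BONDING_SYSFS = '/sys/class/net/{}/bonding'

def read_bond_options(bond_name):
    # Read every bonding option the kernel exposes for this bond device.
    opts = {}
    path = BONDING_SYSFS.format(bond_name)
    for opt in os.listdir(path):
        with open(os.path.join(path, opt)) as f:
            opts[opt] = f.read().strip()
    return opts

def dump_bonding_defaults(bond_name='bondscan-tmp',
                          out='/tmp/bonding-defaults.json'):
    # Illustrative output location; vdsm writes its own defaults file.
    with open(out, 'w') as f:
        json.dump(read_bond_options(bond_name), f)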