[JIRA] (OVIRT-1328) Current storage configuration on mirrors server is causing issues
by Barak Korren (oVirt JIRA)
[ https://ovirt-jira.atlassian.net/browse/OVIRT-1328?page=com.atlassian.jir... ]
Barak Korren reassigned OVIRT-1328:
-----------------------------------
Assignee: Barak Korren (was: infra)
> Current storage configuration on mirrors server is causing issues
> ------------------------------------------------------------------
>
> Key: OVIRT-1328
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1328
> Project: oVirt - virtualization made easy
> Issue Type: Bug
> Components: Repositories Mgmt
> Reporter: Barak Korren
> Assignee: Barak Korren
> Priority: Highest
>
> It seems the current storage configuration for the mirrors server (XFS on an LVM thinpool on a virtio device) is causing issues, and the device now appears to be corrupted and blocking writes.
> We should move to a simpler configuration: plain XFS directly on the virtio device. That should reduce the number of things that can cause issues.
--
This message was sent by Atlassian JIRA
(v1000.910.0#100040)
7 years, 7 months
[JIRA] (OVIRT-1328) Current storage configuration on mirrors server is causing issues
by Barak Korren (oVirt JIRA)
Barak Korren created OVIRT-1328:
-----------------------------------
Summary: Current storage configuration on mirrors server is causing issues
Key: OVIRT-1328
URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1328
Project: oVirt - virtualization made easy
Issue Type: Bug
Components: Repositories Mgmt
Reporter: Barak Korren
Assignee: infra
Priority: Highest
It seems the current storage configuration for the mirrors server (XFS on an LVM thinpool on a virtio device) is causing issues, and the device now appears to be corrupted and blocking writes.
We should move to a simpler configuration: plain XFS directly on the virtio device. That should reduce the number of things that can cause issues.
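As a rough illustration, a minimal sketch of the proposed layout, assuming the virtio disk shows up as /dev/vdb and the mirrors live under /srv/mirrors (both are placeholder names, not the actual host configuration):

    # Plain XFS directly on the virtio device, no LVM thinpool in between.
    # /dev/vdb and /srv/mirrors are assumed placeholders.
    mkfs.xfs /dev/vdb
    mkdir -p /srv/mirrors
    echo '/dev/vdb /srv/mirrors xfs defaults 0 0' >> /etc/fstab
    mount /srv/mirrors

With a single filesystem on the raw device there is no thin-provisioning metadata layer to fill up or corrupt, which is the main point of the simplification.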
--
This message was sent by Atlassian JIRA
(v1000.910.0#100040)
7 years, 7 months
Build failed in Jenkins: system-sync_mirrors-fedora-base-fc26-x86_64 #1
by jenkins@jenkins.phx.ovirt.org
See <http://jenkins.ovirt.org/job/system-sync_mirrors-fedora-base-fc26-x86_64/...>
------------------------------------------
[...truncated 63.01 MB...]
(2147/53505): texlive-fn2e 8% [=- ] 7.4 MB/s | 4.8 GB 115:13 ETA texlive-fn2end-svn15878.1.1-33 FAILED
[... interleaved progress-bar output elided: roughly 250 further texlive-* package downloads (texlive-fmtcount through texlive-glossaries-french) all reported FAILED at 8% / 4.8 GB, 7.4 MB/s ...]
jenkins/scripts/mirror_mgr.sh: line 289: 18358 Killed reposync --config="$reposync_conf" --repoid="$repo_name" --arch="$repo_arch" --cachedir="$MIRRORS_CACHE" --download_path="$MIRRORS_MP_BASE/yum/$repo_name/base" --norepopath --newest-only "${extra_args[@]}"
Build step 'Execute shell' marked build as failure
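For reference, a minimal sketch of re-running the killed reposync step by hand, reusing the flags shown in the log above; the variable values below are assumptions for illustration, not the actual mirror_mgr.sh configuration:

    # Assumed placeholder values; the real ones come from mirror_mgr.sh and the job config.
    reposync_conf=/tmp/reposync.conf        # yum config that defines the repo
    repo_name=fedora-base-fc26-x86_64       # assumed from the job name
    repo_arch=x86_64
    MIRRORS_CACHE=/var/cache/mirrors-cache
    MIRRORS_MP_BASE=/srv/mirrors

    reposync --config="$reposync_conf" --repoid="$repo_name" --arch="$repo_arch" \
        --cachedir="$MIRRORS_CACHE" \
        --download_path="$MIRRORS_MP_BASE/yum/$repo_name/base" \
        --norepopath --newest-only

Running it interactively makes it easier to see whether the downloads stall or whether writes to the mirror mount point fail.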
7 years, 7 months
[JIRA] (OVIRT-1323) Re: [ovirt-users] problem registering to users list
by eyal edri [Administrator] (oVirt JIRA)
[ https://ovirt-jira.atlassian.net/browse/OVIRT-1323?page=com.atlassian.jir... ]
eyal edri [Administrator] reassigned OVIRT-1323:
------------------------------------------------
Assignee: Marc Dequènes (Duck) (was: infra)
Duck,
Can you have a look and see if there are issues with registering to the users list?
FYI, there was an outage in mail services during the weekend, so if you tried subscribing during that time, it might be relevant.
> Re: [ovirt-users] problem registering to users list
> ---------------------------------------------------
>
> Key: OVIRT-1323
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1323
> Project: oVirt - virtualization made easy
> Issue Type: By-EMAIL
> Reporter: sbonazzo
> Assignee: Marc Dequènes (Duck)
>
> On Wed, Apr 12, 2017 at 11:05 PM, Precht, Andrew <
> Andrew.Precht(a)sjlibrary.org> wrote:
> > Hi all,
> > In the end, I ran this on each host node and this is what worked:
> > systemctl stop glusterd && rm -rf /var/lib/glusterd/vols/* && rm -rf
> > /var/lib/glusterd/peers/*
> >
> > Thanks so much for your help.
> >
> > P.S. I work as a sys admin for the San Jose library. Part of my job
> > satisfaction comes from knowing that the work I do here goes directly back
> > into this community. We're fortunate that you, your coworkers, and Red Hat
> > do so much to give back. I have to imagine you too feel this sense of
> > satisfaction. Thanks again…
> >
> > P.P.S. I never did hear back from the users(a)ovirt.org mailing list. I did
> > fill out the fields on this page: https://lists.ovirt.org/
> > mailman/listinfo/users. Yet, every time I send them an email I get: Your
> > message to Users awaits moderator approval. Is there a secret handshake
> > I'm not aware of?
> >
> >
> Opening a ticket on infra to check your account on users mailing list.
> > Regards,
> > Andrew
> >
> > ------------------------------
> > *From:* knarra <knarra(a)redhat.com>
> > *Sent:* Wednesday, April 12, 2017 10:01:33 AM
> >
> > *To:* Precht, Andrew; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon
> > Mureinik; Nir Soffer
> > *Cc:* users
> > *Subject:* Re: [ovirt-users] I’m having trouble deleting a test gluster
> > volume
> >
> > On 04/12/2017 08:45 PM, Precht, Andrew wrote:
> >
> > Hi all,
> >
> > You asked: Any errors in ovirt-engine.log file ?
> >
> > Yes, In the engine.log this error is repeated about every 3 minutes:
> >
> > 2017-04-12 07:16:12,554-07 ERROR [org.ovirt.engine.core.bll.gluster.GlusterTasksSyncJob]
> > (DefaultQuartzScheduler3) [ccc8ed0d-8b91-4397-b6b9-ab0f77c5f7b8] Error
> > updating tasks from CLI: org.ovirt.engine.core.common.errors.EngineException:
> > EngineException: Command execution failed error: Error : Request timed out return
> > code: 1 (Failed with error GlusterVolumeStatusAllFailedException and code
> > 4161) error: Error : Request timed out
> >
> > I am not sure why this says "Request timed out".
> >
> > 1) gluster volume list -> Still shows the deleted volume (test1)
> >
> > 2) gluster peer status -> Shows one of the peers twice with different
> > uuid’s:
> >
> > Hostname: 192.168.10.109 Uuid: 42fbb7de-8e6f-4159-a601-3f858fa65f6c State:
> > Peer in Cluster (Connected) Hostname: 192.168.10.109 Uuid:
> > e058babe-7f9d-49fe-a3ea-ccdc98d7e5b5 State: Peer in Cluster (Connected)
> >
> > How did this happen? Is the hostname the same for two hosts?
> >
> > I tried a gluster volume stop test1, with this result: volume stop:
> > test1: failed: Another transaction is in progress for test1. Please try
> > again after sometime.
> >
> > can you restart glusterd and try to stop and delete the volume?
> >
> > The etc-glusterfs-glusterd.vol.log shows no activity triggered by trying
> > to remove the test1 volume from the UI.
> >
> > The ovirt-engine.log shows this repeating many times, when trying to
> > remove the test1 volume from the UI:
> >
> > 2017-04-12 07:57:38,049-07 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> > (DefaultQuartzScheduler9) [ccc8ed0d-8b91-4397-b6b9-ab0f77c5f7b8] Failed
> > to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[
> > b0e1b909-9a6a-49dc-8e20-3a027218f7e1=<GLUSTER, ACTION_TYPE_FAILED_GLUSTER_OPERATION_INPROGRESS>]',
> > sharedLocks='null'}'
> >
> > Can you restart the ovirt-engine service? I see "failed to acquire
> > lock" in the log. Once ovirt-engine is restarted, whoever is holding the lock
> > should release it and things should work fine.
> >
> > Last but not least, if none of the above works:
> >
> > Login to all your nodes in the cluster.
> > rm -rf /var/lib/glusterd/vols/*
> > rm -rf /var/lib/glusterd/peers/*
> > systemctl restart glusterd on all the nodes.
> >
> > Login to UI and see if any volumes / hosts are present. If yes, remove
> > them.
> >
> > This should clear things for you and you can start from a clean state.
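A minimal sketch consolidating the steps above into a single loop; the host names are placeholders, and these commands wipe every gluster volume and peer definition, so this is only for throw-away test clusters:

    # Placeholder host names; run only on test clusters with no data to keep.
    for node in node1.example.com node2.example.com node3.example.com; do
        ssh root@"$node" 'systemctl stop glusterd &&
                          rm -rf /var/lib/glusterd/vols/* /var/lib/glusterd/peers/* &&
                          systemctl start glusterd'
    done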
> >
> >
> > Thanks much,
> >
> > Andrew
> > ------------------------------
> > *From:* knarra <knarra(a)redhat.com> <knarra(a)redhat.com>
> > *Sent:* Tuesday, April 11, 2017 11:10:04 PM
> > *To:* Precht, Andrew; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon
> > Mureinik; Nir Soffer
> > *Cc:* users
> > *Subject:* Re: [ovirt-users] I’m having trouble deleting a test gluster
> > volume
> >
> > On 04/12/2017 03:35 AM, Precht, Andrew wrote:
> >
> > I just noticed this in the Alerts tab: Detected deletion of volume test1
> > on cluster 8000-1, and deleted it from engine DB.
> >
> > Yet, It still shows in the web UI?
> >
> > Any errors in the ovirt-engine.log file? If the volume is deleted from the db,
> > ideally it should be deleted from the UI too. Can you go to the gluster nodes and
> > check the following:
> >
> > 1) gluster volume list -> should not return anything since you have
> > deleted the volumes.
> >
> > 2) gluster peer status -> on all the nodes should show that all the peers
> > are in connected state.
> >
> > can you tail -f /var/log/ovirt-engine/ovirt-engine.log and gluster log
> > and capture the error messages when you try deleting the volume from UI?
> >
> > The log you pasted in the previous mail only contains info messages, and I could
> > not get any details from it on why the volume delete is failing
> >
> > ------------------------------
> > *From:* Precht, Andrew
> > *Sent:* Tuesday, April 11, 2017 2:39:31 PM
> > *To:* knarra; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon Mureinik;
> > Nir Soffer
> > *Cc:* users
> > *Subject:* Re: [ovirt-users] I’m having trouble deleting a test gluster
> > volume
> >
> > The plot thickens…
> > I put all hosts in the cluster into maintenance mode, with the Stop
> > Gluster service checkbox checked. I then deleted the
> > /var/lib/glusterd/vols/test1 directory on all hosts. I then took the host
> > that the test1 volume was on out of maintenance mode. Then I tried to
> > remove the test1 volume from within the web UI. With no luck, I got the
> > message: Could not delete Gluster Volume test1 on cluster 8000-1.
> >
> > I went back and checked all host for the test1 directory, it is not on any
> > host. Yet I still can’t remove it…
> >
> > Any suggestions?
> >
> > ------------------------------
> > *From:* Precht, Andrew
> > *Sent:* Tuesday, April 11, 2017 1:15:22 PM
> > *To:* knarra; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon Mureinik;
> > Nir Soffer
> > *Cc:* users
> > *Subject:* Re: [ovirt-users] I’m having trouble deleting a test gluster
> > volume
> >
> > Here is an update…
> >
> > I checked the /var/log/glusterfs/etc-glusterfs-glusterd.vol.log on the
> > node that had the trouble volume (test1). I didn’t see any errors. So, I
> > ran a tail -f on the log as I tried to remove the volume using the web UI.
> > here is what was appended:
> >
> > [2017-04-11 19:48:40.756360] I [MSGID: 106487] [glusterd-handler.c:1474:__
> > glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req
> > [2017-04-11 19:48:42.238840] I [MSGID: 106488] [glusterd-handler.c:1537:__
> > glusterd_handle_cli_get_volume] 0-management: Received get vol req
> > The message "I [MSGID: 106487] [glusterd-handler.c:1474:__
> > glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req"
> > repeated 6 times between [2017-04-11 19:48:40.756360] and [2017-04-11
> > 19:49:32.596536]
> > The message "I [MSGID: 106488] [glusterd-handler.c:1537:__
> > glusterd_handle_cli_get_volume] 0-management: Received get vol req"
> > repeated 20 times between [2017-04-11 19:48:42.238840] and [2017-04-11
> > 19:49:34.082179]
> > [2017-04-11 19:51:41.556077] I [MSGID: 106487] [glusterd-handler.c:1474:__
> > glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req
> >
> > I’m seeing that the timestamps on these log entries do not match the time
> > on the node.
> >
> > The next steps
> > I stopped the glusterd service on the node with volume test1
> > I deleted it with: rm -rf /var/lib/glusterd/vols/test1
> > I started the glusterd service.
> >
> > After starting the gluster service back up, the directory
> > /var/lib/glusterd/vols/test1 reappears.
> > I’m guessing syncing with the other nodes?
> > Is this because I have the Volume Option: auth allow *
> > Do I need to remove the directory /var/lib/glusterd/vols/test1 on all
> > nodes in the cluster individually?
> >
> > thanks
> >
> > ------------------------------
> > *From:* knarra <knarra(a)redhat.com> <knarra(a)redhat.com>
> > *Sent:* Tuesday, April 11, 2017 11:51:18 AM
> > *To:* Precht, Andrew; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon
> > Mureinik; Nir Soffer
> > *Cc:* users
> > *Subject:* Re: [ovirt-users] I’m having trouble deleting a test gluster
> > volume
> >
> > On 04/11/2017 11:28 PM, Precht, Andrew wrote:
> >
> > Hi all,
> > The node is oVirt Node 4.1.1 with glusterfs-3.8.10-1.el7.
> > On the node I can not find /var/log/glusterfs/glusterd.log However, there
> > is a /var/log/glusterfs/glustershd.log
> >
> > Can you check if /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
> > exists? If yes, can you check if there is any error present in that file?
> >
> >
> > What happens if I follow the four steps outlined here to remove the volume
> > from the node *BUT*, I do have another volume present in the cluster. It
> > too is a test volume. Neither one has any data on them. So, data loss is
> > not an issue.
> >
> > Running those four steps will remove the volume from your cluster. If the
> > volumes you have are test volumes, you could just follow the steps
> > outlined to delete them (since you are not able to delete from the UI) and
> > bring the cluster back to a normal state.
> >
> >
> > ------------------------------
> > *From:* knarra <knarra(a)redhat.com> <knarra(a)redhat.com>
> > *Sent:* Tuesday, April 11, 2017 10:32:27 AM
> > *To:* Sandro Bonazzola; Precht, Andrew; Sahina Bose; Tal Nisan; Allon
> > Mureinik; Nir Soffer
> > *Cc:* users
> > *Subject:* Re: [ovirt-users] I’m having trouble deleting a test gluster
> > volume
> >
> > On 04/11/2017 10:44 PM, Sandro Bonazzola wrote:
> >
> > Adding some people
> >
> > On 11/Apr/2017 19:06, "Precht, Andrew" <Andrew.Precht(a)sjlibrary.org>
> > wrote:
> >
> >> Hi Ovirt users,
> >> I’m a newbie to oVirt and I’m having trouble deleting a test gluster
> >> volume. The nodes are 4.1.1 and the engine is 4.1.0
> >>
> >> When I try to remove the test volume, I click Remove, the dialog box
> >> prompting to confirm the deletion pops up and after I click OK, the dialog
> >> box changes to show a little spinning wheel and then it disappears. In the
> >> end the volume is still there.
> >>
> > with the latest version of glusterfs & ovirt we do not see any issue with
> > deleting a volume. Can you please check /var/log/glusterfs/glusterd.log
> > file if there is any error present?
> >
> >
> > The test volume was distributed with two host members. One of the hosts I
> >> was able to remove from the volume by removing the host from the cluster.
> >> When I try to remove the remaining host in the volume, even with the “Force
> >> Remove” box ticked, I get this response: Cannot remove Host. Server having
> >> Gluster volume.
> >>
> >> What to try next?
> >>
> > Since you have already removed the volume from one host in the cluster and
> > you still see it on another host, you can do the following to remove the
> > volume from the other host.
> >
> > 1) Login to the host where the volume is present.
> > 2) cd to /var/lib/glusterd/vols
> > 3) rm -rf <vol_name>
> > 4) Restart glusterd on that host.
> >
> > And before doing the above make sure that you do not have any other volume
> > present in the cluster.
> >
> > The above steps should not be run on a production system, as you might lose
> > the volume and data.
> >
> > Now removing the host from the UI should succeed.
> >
> >
> >> P.S. I’ve tried to join this user group several times in the past, with
> >> no response.
> >> Is it possible for me to join this group?
> >>
> >> Regards,
> >> Andrew
> >>
> >>
> >
> > _______________________________________________
> > Users mailing list: Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
> >
> >
> >
> >
> >
> --
> SANDRO BONAZZOLA
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
> Red Hat EMEA <https://www.redhat.com/>
> <https://red.ht/sig>
> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
--
This message was sent by Atlassian JIRA
(v1000.910.0#100040)
7 years, 7 months
[JIRA] (OVIRT-1323) Re: [ovirt-users] problem registering to users list
by eyal edri [Administrator] (oVirt JIRA)
[ https://ovirt-jira.atlassian.net/browse/OVIRT-1323?page=com.atlassian.jir... ]
eyal edri [Administrator] updated OVIRT-1323:
---------------------------------------------
Summary: Re: [ovirt-users] problem registering to users list (was: Re: [ovirt-users] I’m having trouble deleting a test gluster volume)
> Re: [ovirt-users] problem registering to users list
> ---------------------------------------------------
>
> Key: OVIRT-1323
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1323
> Project: oVirt - virtualization made easy
> Issue Type: By-EMAIL
> Reporter: sbonazzo
> Assignee: infra
>
--
This message was sent by Atlassian JIRA
(v1000.910.0#100040)
7 years, 7 months
oVirt services outage last Friday [ 21/04/17 ]
by Eyal Edri
FYI,
Last Friday, the data center hosting the oVirt services had a major outage
which affected some of those services, including CI, e-mail and the oVirt
repositories.
All systems should be up by now, although, since one of the affected services
was e-mail, your messages might still be in the queue waiting to be sent.
If you are still experiencing any service downtime, please report it to
infra-support(a)ovirt.org.
We are working with the data center's IT team to get the full RCA and will
report once we have more info.
Thanks for your understanding and sorry for the inconvenience,
oVirt infra team
--
Eyal edri
ASSOCIATE MANAGER
RHV DevOps
EMEA VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
<https://red.ht/sig> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
7 years, 7 months