engine-setup error
by qinglong.dong@horebdata.cn
Hi,
        I wanted to install the WebSocket proxy on another host, but I got an error when setting up the engine.

[root@proxy ~]# engine-setup
[ INFO  ] Stage: Initializing
[ INFO  ] Stage: Environment setup
          Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf']
          Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20160318103658-88xphu.log
          Version: otopi-1.4.1 (otopi-1.4.1-1.el6)
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Yum Downloading: ovirt-3.6-epel/primary_db 1.9 M(32%)
[ INFO  ] Yum Downloading: virtio-win-stable/primary_db (0%)
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup
[ INFO  ] Stage: Environment customization

          --== PRODUCT OPTIONS ==--

          Configure Engine on this host (Yes, No) [Yes]: no
          Configure VM Console Proxy on this host (Yes, No) [Yes]:
          Configure WebSocket Proxy on this host (Yes, No) [Yes]:

          --== PACKAGES ==--

[ INFO  ] Checking for product updates...
[ INFO  ] No product updates found

          --== ALL IN ONE CONFIGURATION ==--

          --== NETWORK CONFIGURATION ==--

          Host fully qualified DNS name of this server [proxy.horebdata.cn]:
[WARNING] Failed to resolve proxy.horebdata.cn using DNS, it can be resolved only locally
          Setup can automatically configure the firewall on this system.
          Note: automatic configuration of the firewall may overwrite current settings.
          Do you want Setup to configure the firewall? (Yes, No) [Yes]:
[ INFO  ] iptables will be configured as firewall manager.
          Host fully qualified DNS name of the engine server [proxy.horebdata.cn]: engine.horebdata.cn
[WARNING] Failed to resolve engine.horebdata.cn using DNS, it can be resolved only locally
[WARNING] Failed to resolve engine.horebdata.cn using DNS, it can be resolved only locally

          --== DATABASE CONFIGURATION ==--

          --== OVIRT ENGINE CONFIGURATION ==--

          --== STORAGE CONFIGURATION ==--

          Default SAN wipe after delete (Yes, No) [No]:

          --== PKI CONFIGURATION ==--

          Setup will need to do some actions on the remote engine server. Either automatically, using ssh as root to access it, or you will be prompted to manually perform each such action.
          Please choose one of the following:
          1 - Access remote engine server using ssh as root
          2 - Perform each action manually, use files to copy content around
          (1, 2) [1]: 1
          ssh port on remote engine server [22]:
          root password on remote engine server engine.horebdata.cn:
[ INFO  ] Signing the WebSocket Proxy certificate on the engine server
[ INFO  ] WebSocket Proxy certificate signed successfully

          --== APACHE CONFIGURATION ==--

          --== SYSTEM CONFIGURATION ==--

          --== MISC CONFIGURATION ==--

          --== END OF CONFIGURATION ==--

[ INFO  ] Stage: Setup validation
          Generated iptables rules are different from current ones.
          Do you want to review them? (Yes, No) [No]:

          --== CONFIGURATION PREVIEW ==--

          Default SAN wipe after delete           : False
          Firewall manager                        : iptables
          Update Firewall                         : True
          Host FQDN                               : proxy.horebdata.cn
          Engine installation                     : False
          Configure VMConsole Proxy               : True
          Engine Host FQDN                        : engine.horebdata.cn
          Configure WebSocket Proxy               : True

          Please confirm installation settings (OK, Cancel) [OK]:
[ INFO  ] Stage: Transaction setup
[ INFO  ] Stopping engine service
[ INFO  ] Stopping ovirt-fence-kdump-listener service
[ INFO  ] Stopping websocket-proxy service
[ INFO  ] Stage: Misc configuration
[ INFO  ] Stage: Package installation
[ INFO  ] Stage: Misc configuration
[ ERROR ] Failed to execute stage 'Misc configuration': 'NoneType' object has no attribute 'execute'
[ INFO  ] Yum Performing yum transaction rollback
[ INFO  ] Stage: Clean up
          Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20160318103658-88xphu.log
[ INFO  ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20160318103900-setup.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Execution of setup failed
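If it helps with triage: "'NoneType' object has no attribute 'execute'" is the generic Python failure of calling a method on a lookup that silently returned None. A minimal self-contained illustration (my assumption about the shape of the bug, not the actual otopi code):

commands = {}  # imagine the expected command/service entry was never registered
command = commands.get('websocket-proxy')  # hypothetical key; dict.get() returns None instead of raising
command.execute()  # AttributeError: 'NoneType' object has no attribute 'execute'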
Unable to move disk on (non-)stateless machine
by nicolas@devels.es
Hi,
We're running oVirt 3.6.3.4-1. We have 2 storage domains of the same
kind (both iSCSI). We're moving some of the disks from one to the other,
preferably without shutting down the machine. When moving a disk while
the VM is running, the following error is thrown:
Error while executing action: Cannot move Virtual Machine Disk. The
VM is running as Stateless. Please try again when VM is not running as
Stateless.
This machine is inside a VM pool based on a template that is not
stateless, nor is the VM pool, nor the VM itself.
In [1]: template = api.templates.get(name='templatename')
In [2]: template.get_stateless()
Out[2]: False
In [3]: vm = api.vms.get(name='vmname')
In [4]: vm.get_stateless()
Out[4]: False
Additionally, in the General tab of the VM Pool:
Is Stateless: false
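In case it is useful, a sketch of how I could also list the VM's snapshots through the same SDK session, to check whether a stale stateless snapshot is lingering (hedged: the getters and the 'stateless' snapshot type are assumptions from the 3.x SDK's usual naming, not verified here):

In [5]: for snap in vm.snapshots.list():
   ...:     print snap.get_description(), snap.get_type(), snap.get_snapshot_status()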
If nothing is stateless, why is the error shown?
Thanks.
Nicolás
ovirt node connection fails
by Bill James
After rebooting a hardware node, it can no longer connect in oVirt.
2016-03-17 23:02:46,547 ERROR [org.ovirt.engine.core.uutils.ssh.SSHDialog]
(org.ovirt.thread.pool-8-thread-9) [7389eed3] SSH error running command
root(a)10.176.30.97:'umask 0077; MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d -t
ovirt-XXXXXXXXXX)"; trap "chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1;
rm -fr \"${MYTMP}\" > /dev/null 2>&1" 0; tar --warning=no-timestamp -C "${MYTMP}" -x &&
"${MYTMP}"/ovirt-host-deploy DIALOG/dialect=str:machine
DIALOG/customization=bool:True': Command returned failure code 1 during
SSH session 'root(a)10.176.30.97'
2016-03-17 23:02:46,547 ERROR [org.ovirt.engine.core.uutils.ssh.SSHDialog]
(org.ovirt.thread.pool-8-thread-9) [7389eed3] Exception: java.io.IOException:
Command returned failure code 1 during SSH session 'root(a)10.176.30.97'
    at org.ovirt.engine.core.uutils.ssh.SSHClient.executeCommand(SSHClient.java:527) [uutils.jar:]
I can run this and it works fine:
[root@ovirt test host-deploy]# HOST="10.176.30.97"; ssh -i /etc/pki/ovirt-engine/keys/engine_id_rsa "$HOST" 'umask 0077; echo $OVIRT_TMPDIR; MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d -t ovirt-XXXXXXXXXX)"; echo MYTMP=$MYTMP; trap "chmod -R u+rwX \"${MYTMP}\"" 0'
MYTMP=/tmp/ovirt-IhxDxcDd8y
[root@ovirt test host-deploy]# echo $?
0
When I add the tar part it just waits forever.
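Presumably that is because tar -x with no -f reads the archive from stdin: the engine streams the ovirt-host-deploy bundle over the ssh channel, so run interactively, tar just sits waiting for input. A hedged sketch of reproducing the engine-side invocation with a bundle fed on stdin (the local bundle path is hypothetical):

import subprocess

# Same remote command shape as in the engine log above, minus the deploy step
cmd = ('umask 0077; MYTMP="$(mktemp -d -t ovirt-XXXXXXXXXX)"; '
       'tar --warning=no-timestamp -C "${MYTMP}" -x')
with open('ovirt-host-deploy.tar', 'rb') as bundle:  # hypothetical local copy of the bundle
    subprocess.run(['ssh', '-i', '/etc/pki/ovirt-engine/keys/engine_id_rsa',
                    '10.176.30.97', cmd], stdin=bundle, check=True)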
The only change I made before the reboot was uncommenting 'user = "root"' in
/etc/libvirt/qemu.conf so that import-to-ovirt.pl would work.
Another host had the same issue. I tried putting it into maintenance,
reinstalling, and removing and re-adding it. That didn't help.
Finally I just re-kickstarted that host and it works fine now,
complete with the qemu.conf change.
Attached are the engine.log, vdsm.log and supervdsm.log.
I have made some modifications trying to get a vdsm hook to work (see the
"multiple NICs VLAN ID conflict" thread), but I don't think that is
related, especially since re-kickstarting one host fixed it.
ovirt-engine-3.6.3.4-1.el7.centos.noarch
Thanks!
How do I start a VM in a different 3.6 cluster
by Bond, Darryl
I have 2 clusters in one Data Centre, 1x Nehalem and 1x SandyBridge. I can live migrate from Nehalem to SandyBridge.
I cannot change the cluster for a stopped VM or choose which cluster to run it in.
The Cluster pulldown menu for the VM implies that it should be possible, but there is only one choice: the cluster the VM last ran in.
I can find nothing in the documentation etc.
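For what it's worth, a hedged sketch of what I would try through the Python SDK while the VM is down (untested; the setter and lookup names are assumptions based on the SDK's generated API):

from ovirtsdk.api import API

api = API(url='https://engine.example/api', username='admin@internal',
          password='***')  # placeholder connection details
vm = api.vms.get(name='myvm')  # the VM must be down first (assumption)
vm.set_cluster(api.clusters.get(name='SandyBridge'))
vm.update()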
Thanks in advance.
Darryl
oVirt - rescan new copied disks to storage
by paf1@email.cz
Hello,
how can I put a copy of VM disks (a copy made outside the oVirt environment)
into the oVirt inventory, so that it is available for the "attach disk" option?
regs.
Pavel
Fwd: Re: question mark on VM ( DB status 8 )
by paf1@email.cz
URGENT
-------- Forwarded Message --------
Subject: Re: [ovirt-users] question mark on VM ( DB status 8 )
Date: Thu, 17 Mar 2016 16:43:54 +0200
From: Nir Soffer <nsoffer(a)redhat.com>
To: paf1(a)email.cz <paf1(a)email.cz>
Can you send this to the users list?
This looks like a virt issue, so it should be checked by the guys
working on this part of the code.
Thanks,
Nir
On Thu, Mar 17, 2016 at 4:07 PM, paf1(a)email.cz <paf1(a)email.cz> wrote:
> Hi Nir,
> have a look at this piece of the logs, which repeats in a cycle.
>
> The main issue happened about 3-5AM today ( 17.Mar)
>
> CSA_EBSDB_TEST2 - was shut down from the OS, but its status was not updated
> in the oVirt GUI (changed manually in the DB to status 1); one other VM is
> still in status "8" due to a locked snapshot file ( sf-sh-s07 ).
>
> engine.log
> ==========
>
> repeating hour after hour ... continually
>
> 2016-03-17 14:38:21,146 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-20) START,
> GlusterVolumesListVDSCommand(HostName = 2kvm1, HostId =
> 4c3a2622-14d5-43c8-8e15-99cb66104b5a), log id: 5a34e053
> 2016-03-17 14:38:21,830 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-20) FINISH, GlusterVolumesListVDSCommand,
> return:
> {a5a8ccbc-edee-4e49-9e2a-4d2ee5767f76=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@240192c6,
> 18310aeb-639f-4b6d-9ef4-9ef560d6175c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@753f6685,
> 4a6d775d-4a51-4f6c-9bfa-f7ef57f3ca1d=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@79a21b20,
> f410c6a9-9a51-42b3-89bb-c20ac72a0461=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a4634e44,
> 62c89345-fd61-4b67-b8b4-69296eb7d217=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@fd990620,
> aa2d607d-3c6c-4f13-8205-aae09dcc9d35=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@57883869,
> b4356604-4404-428a-9da6-f1636115e2fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@3b458bc8,
> 9745551f-4696-4a6c-820a-619e359a61fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@80f225de,
> 25a5ec22-660e-42a0-aa00-45211d341738=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@ec4c19bd,
> 6060ff77-d552-4d94-97bf-5a32982e7d8a=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@947dc2e4,
> cbf142f8-a40b-4cf4-ad29-2243c81d30c1=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f773ab98},
> log id: 5a34e053
> 2016-03-17 14:38:27,131 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-79) START,
> GlusterVolumesListVDSCommand(HostName = 2kvm1, HostId =
> 4c3a2622-14d5-43c8-8e15-99cb66104b5a), log id: 24e7703f
> 2016-03-17 14:38:27,801 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-79) FINISH, GlusterVolumesListVDSCommand,
> return:
> {a5a8ccbc-edee-4e49-9e2a-4d2ee5767f76=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@4e72f0f4,
> 18310aeb-639f-4b6d-9ef4-9ef560d6175c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@89bfd4dd,
> 4a6d775d-4a51-4f6c-9bfa-f7ef57f3ca1d=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f6cb25b,
> f410c6a9-9a51-42b3-89bb-c20ac72a0461=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f4bb56bf,
> 62c89345-fd61-4b67-b8b4-69296eb7d217=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@e0121f88,
> aa2d607d-3c6c-4f13-8205-aae09dcc9d35=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@435fc00f,
> b4356604-4404-428a-9da6-f1636115e2fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7b23bf23,
> 9745551f-4696-4a6c-820a-619e359a61fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@1f8e886,
> 25a5ec22-660e-42a0-aa00-45211d341738=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@1fbbe1c1,
> 6060ff77-d552-4d94-97bf-5a32982e7d8a=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@87c991cd,
> cbf142f8-a40b-4cf4-ad29-2243c81d30c1=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@2fc8ef3e},
> log id: 24e7703f
> 2016-03-17 14:38:33,097 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-15) START,
> GlusterVolumesListVDSCommand(HostName = 2kvm1, HostId =
> 4c3a2622-14d5-43c8-8e15-99cb66104b5a), log id: 2e987652
> 2016-03-17 14:38:33,809 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-15) FINISH, GlusterVolumesListVDSCommand,
> return:
> {a5a8ccbc-edee-4e49-9e2a-4d2ee5767f76=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@22f57524,
> 18310aeb-639f-4b6d-9ef4-9ef560d6175c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@229b8873,
> 4a6d775d-4a51-4f6c-9bfa-f7ef57f3ca1d=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@d9e0727e,
> f410c6a9-9a51-42b3-89bb-c20ac72a0461=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@3e54e436,
> 62c89345-fd61-4b67-b8b4-69296eb7d217=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a32a922d,
> aa2d607d-3c6c-4f13-8205-aae09dcc9d35=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@ef616411,
> b4356604-4404-428a-9da6-f1636115e2fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@987712e1,
> 9745551f-4696-4a6c-820a-619e359a61fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@21786d69,
> 25a5ec22-660e-42a0-aa00-45211d341738=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@3411ecb4,
> 6060ff77-d552-4d94-97bf-5a32982e7d8a=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@9ccdb073,
> cbf142f8-a40b-4cf4-ad29-2243c81d30c1=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@ae4e2f13},
> log id: 2e987652
> 2016-03-17 14:38:39,131 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-70) START,
> GlusterVolumesListVDSCommand(HostName = 2kvm1, HostId =
> 4c3a2622-14d5-43c8-8e15-99cb66104b5a), log id: 2d9df607
> 2016-03-17 14:38:39,812 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-70) FINISH, GlusterVolumesListVDSCommand,
> return:
> {a5a8ccbc-edee-4e49-9e2a-4d2ee5767f76=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@24ba8cf2,
> 18310aeb-639f-4b6d-9ef4-9ef560d6175c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@be52739d,
> 4a6d775d-4a51-4f6c-9bfa-f7ef57f3ca1d=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@fa7acd26,
> f410c6a9-9a51-42b3-89bb-c20ac72a0461=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@bfa54163,
> 62c89345-fd61-4b67-b8b4-69296eb7d217=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a50ab364,
> aa2d607d-3c6c-4f13-8205-aae09dcc9d35=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@c85c798b,
> b4356604-4404-428a-9da6-f1636115e2fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@4404dc57,
> 9745551f-4696-4a6c-820a-619e359a61fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a87b6b00,
> 25a5ec22-660e-42a0-aa00-45211d341738=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@58e582ba,
> 6060ff77-d552-4d94-97bf-5a32982e7d8a=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@127588cb,
> cbf142f8-a40b-4cf4-ad29-2243c81d30c1=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@101be9b2},
> log id: 2d9df607
> 2016-03-17 14:38:45,136 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-91) START,
> GlusterVolumesListVDSCommand(HostName = 2kvm1, HostId =
> 4c3a2622-14d5-43c8-8e15-99cb66104b5a), log id: 4b3faf1c
> 2016-03-17 14:38:45,152 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterTasksListVDSCommand]
> (DefaultQuartzScheduler_Worker-43) START,
> GlusterTasksListVDSCommand(HostName = 1kvm1, HostId =
> 98c4520a-bcff-45b2-8f66-e360e10e1fb2), log id: 1df75525
> 2016-03-17 14:38:45,400 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterTasksListVDSCommand]
> (DefaultQuartzScheduler_Worker-43) FINISH, GlusterTasksListVDSCommand,
> return: [], log id: 1df75525
> 2016-03-17 14:38:45,814 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-91) FINISH, GlusterVolumesListVDSCommand,
> return:
> {a5a8ccbc-edee-4e49-9e2a-4d2ee5767f76=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@53cb3de0,
> 18310aeb-639f-4b6d-9ef4-9ef560d6175c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@2cf0ae7f,
> 4a6d775d-4a51-4f6c-9bfa-f7ef57f3ca1d=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@bac16bbe,
> f410c6a9-9a51-42b3-89bb-c20ac72a0461=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@5b460bdf,
> 62c89345-fd61-4b67-b8b4-69296eb7d217=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@47213703,
> aa2d607d-3c6c-4f13-8205-aae09dcc9d35=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@714406f8,
> b4356604-4404-428a-9da6-f1636115e2fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@94740550,
> 9745551f-4696-4a6c-820a-619e359a61fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@c00582e3,
> 25a5ec22-660e-42a0-aa00-45211d341738=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@e5687263,
> 6060ff77-d552-4d94-97bf-5a32982e7d8a=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a0163ead,
> cbf142f8-a40b-4cf4-ad29-2243c81d30c1=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@1e1ea424},
> log id: 4b3faf1c
>
>
> vdsm.log
> this block repeats non-stop ..
>
> Thread-798::DEBUG::2016-03-17
> 14:41:07,108::libvirtconnection::151::root::(wrapper) Unknown libvirterror:
> ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with
> matching uuid 'a60a0eae-9738-4833-9feb-de2494c545a4' (CSA_EBSDB_TEST2)
> Thread-798::DEBUG::2016-03-17
> 14:41:07,109::libvirtconnection::151::root::(wrapper) Unknown libvirterror:
> ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with
> matching uuid 'a60a0eae-9738-4833-9feb-de2494c545a4' (CSA_EBSDB_TEST2)
> Thread-798::DEBUG::2016-03-17
> 14:41:07,111::libvirtconnection::151::root::(wrapper) Unknown libvirterror:
> ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with
> matching uuid 'a60a0eae-9738-4833-9feb-de2494c545a4' (CSA_EBSDB_TEST2)
> Thread-798::DEBUG::2016-03-17
> 14:41:07,113::libvirtconnection::151::root::(wrapper) Unknown libvirterror:
> ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with
> matching uuid 'a60a0eae-9738-4833-9feb-de2494c545a4' (CSA_EBSDB_TEST2)
> Thread-798::DEBUG::2016-03-17
> 14:41:07,114::libvirtconnection::151::root::(wrapper) Unknown libvirterror:
> ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with
> matching uuid 'a60a0eae-9738-4833-9feb-de2494c545a4' (CSA_EBSDB_TEST2)
> Thread-158::DEBUG::2016-03-17
> 14:41:07,521::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
> if=/rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P5/1ca56b45-701e-4c22-9f59-3aebea4d8477/dom_md/metadata
> iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
> Thread-158::DEBUG::2016-03-17
> 14:41:07,560::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
> <err> = '0+1 records in\n0+1 records out\n998 bytes (998 B) copied,
> 0.000523763 s, 1.9 MB/s\n'; <rc> = 0
> Thread-180::DEBUG::2016-03-17
> 14:41:07,565::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
> if=/rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P1/42d710a9-b844-43dc-be41-77002d1cd553/dom_md/metadata
> iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
> Thread-126::DEBUG::2016-03-17
> 14:41:07,566::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
> if=/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P1/553d9b92-e4a0-4042-a579-4cabeb55ded4/dom_md/metadata
> iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
> Thread-180::DEBUG::2016-03-17
> 14:41:07,616::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
> <err> = '0+1 records in\n0+1 records out\n333 bytes (333 B) copied,
> 0.000606695 s, 549 kB/s\n'; <rc> = 0
> Thread-158::INFO::2016-03-17
> 14:41:07,616::clusterlock::219::Storage.SANLock::(acquireHostId) Acquiring
> host id for domain 1ca56b45-701e-4c22-9f59-3aebea4d8477 (id: 3)
> Thread-158::DEBUG::2016-03-17
> 14:41:07,619::clusterlock::237::Storage.SANLock::(acquireHostId) Host id for
> domain 1ca56b45-701e-4c22-9f59-3aebea4d8477 successfully acquired (id: 3)
> Thread-126::DEBUG::2016-03-17
> 14:41:07,620::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
> <err> = '0+1 records in\n0+1 records out\n333 bytes (333 B) copied,
> 0.000476478 s, 699 kB/s\n'; <rc> = 0
> Thread-180::INFO::2016-03-17
> 14:41:07,623::clusterlock::219::Storage.SANLock::(acquireHostId) Acquiring
> host id for domain 42d710a9-b844-43dc-be41-77002d1cd553 (id: 3)
> Thread-180::DEBUG::2016-03-17
> 14:41:07,624::clusterlock::237::Storage.SANLock::(acquireHostId) Host id for
> domain 42d710a9-b844-43dc-be41-77002d1cd553 successfully acquired (id: 3)
> Thread-126::INFO::2016-03-17
> 14:41:07,626::clusterlock::219::Storage.SANLock::(acquireHostId) Acquiring
> host id for domain 553d9b92-e4a0-4042-a579-4cabeb55ded4 (id: 3)
> Thread-126::DEBUG::2016-03-17
> 14:41:07,626::clusterlock::237::Storage.SANLock::(acquireHostId) Host id for
> domain 553d9b92-e4a0-4042-a579-4cabeb55ded4 successfully acquired (id: 3)
> Thread-1897022::DEBUG::2016-03-17
> 14:41:07,701::__init__::481::jsonrpc.JsonRpcServer::(_serveRequest) Calling
> 'GlusterVolume.list' in bridge with {}
> Thread-113::DEBUG::2016-03-17
> 14:41:07,704::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
> if=/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-BCK/0fcad888-d573-47be-bef3-0bc0b7a99fb7/dom_md/metadata
> iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
> Thread-113::DEBUG::2016-03-17
> 14:41:07,747::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
> <err> = '0+1 records in\n0+1 records out\n335 bytes (335 B) copied,
> 0.000568018 s, 590 kB/s\n'; <rc> = 0
> Thread-198::DEBUG::2016-03-17
> 14:41:07,758::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
> if=/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P2/88adbd49-62d6-45b1-9992-b04464a04112/dom_md/metadata
> iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
> Thread-188::DEBUG::2016-03-17
> 14:41:07,758::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
> if=/rhev/data-center/mnt/glusterSD/localhost:_2KVM12__P4/300e9ac8-3c2f-4703-9bb1-1df2130c7c97/dom_md/metadata
> iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
> Thread-198::DEBUG::2016-03-17
> 14:41:07,811::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
> <err> = '0+1 records in\n0+1 records out\n333 bytes (333 B) copied,
> 0.000455407 s, 731 kB/s\n'; <rc> = 0
> Thread-188::DEBUG::2016-03-17
> 14:41:07,815::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
> <err> = '0+1 records in\n0+1 records out\n734 bytes (734 B) copied,
> 0.000535009 s, 1.4 MB/s\n'; <rc> = 0
> Thread-198::INFO::2016-03-17
> 14:41:07,826::clusterlock::219::Storage.SANLock::(acquireHostId) Acquiring
> host id for domain 88adbd49-62d6-45b1-9992-b04464a04112 (id: 3)
> Thread-198::DEBUG::2016-03-17
> 14:41:07,828::clusterlock::237::Storage.SANLock::(acquireHostId) Host id for
> domain 88adbd49-62d6-45b1-9992-b04464a04112 successfully acquired (id: 3)
> Thread-98::DEBUG::2016-03-17
> 14:41:07,838::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
> if=/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P3/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/metadata
> iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
> Thread-98::DEBUG::2016-03-17
> 14:41:07,870::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
> <err> = '0+1 records in\n0+1 records out\n998 bytes (998 B) copied,
> 0.000564777 s, 1.8 MB/s\n'; <rc> = 0
> VM Channels Listener::DEBUG::2016-03-17
> 14:41:07,883::vmchannels::133::vds::(_handle_unconnected) Trying to connect
> fileno 43.
> VM Channels Listener::DEBUG::2016-03-17
> 14:41:07,889::vmchannels::133::vds::(_handle_unconnected) Trying to connect
> fileno 47.
>
>
>
>
> On 17.3.2016 12:58, Nir Soffer wrote:
>
> On Thu, Mar 17, 2016 at 11:35 AM, paf1(a)email.cz <paf1(a)email.cz> wrote:
>
> I used that, but the lock is active again within a few seconds.
> And oVirt does not update any VM's status.
>
> Unlocking entities is ok when you know that the operation that took
> the lock is finished
> or failed. This is a workaround for buggy operations leaving disks in
> locked state, not
> a normal way to use the system.
>
> We first must understand the flow that caused the snapshot to
> be locked, and
> why it remains locked.
>
> Please describe in detail the operations on the engine side, and
> provide engine and vdsm
> logs showing this timeframe.
>
> Nir
>
> Pa.
>
>
> On 17.3.2016 10:26, Eli Mesika wrote:
>
>
>
> ________________________________
>
> From: paf1(a)email.cz
> To: "users" <users(a)ovirt.org>
> Sent: Thursday, March 17, 2016 9:27:11 AM
> Subject: [ovirt-users] question mark on VM ( DB status 8 )
>
> Hello,
> during backup
>
> What do you mean by "backup"? Can you describe how you back up the VM?
>
> The VM hung with a question mark in oVirt and status 8 in the DB;
> the snapshot file (for backup) is locked.
> How do I clear the snapshot lock and wake this VM from the "unknown" state?
>
>
> Try using the unlock_entity.sh utility (run with --help for usage)
>
>
>
> regs.
> pavel
>
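For reference, the unlock utility Eli suggests above lives on the engine host; a hedged sketch of driving it from Python (the path and flags are from memory, so check --help before relying on them):

import subprocess

# Hypothetical snapshot UUID taken from the engine DB / logs
subprocess.run(['/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh',
                '-t', 'snapshot', '00000000-0000-0000-0000-000000000000'],
               check=True)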
--------------090207090508000404060502
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: 7bit
<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
</head>
<body text="#000066" bgcolor="#FFFFFF">
<br>
<div class="moz-forward-container">URGENT<br>
<br>
-------- Forwarded Message --------
<table class="moz-email-headers-table" border="0" cellpadding="0"
cellspacing="0">
<tbody>
<tr>
<th nowrap="nowrap" valign="BASELINE" align="RIGHT">Subject:
</th>
<td>Re: [ovirt-users] question mark on VM ( DB status 8 )</td>
</tr>
<tr>
<th nowrap="nowrap" valign="BASELINE" align="RIGHT">Date: </th>
<td>Thu, 17 Mar 2016 16:43:54 +0200</td>
</tr>
<tr>
<th nowrap="nowrap" valign="BASELINE" align="RIGHT">From: </th>
<td>Nir Soffer <a class="moz-txt-link-rfc2396E" href="mailto:nsoffer@redhat.com"><nsoffer(a)redhat.com></a></td>
</tr>
<tr>
<th nowrap="nowrap" valign="BASELINE" align="RIGHT">To: </th>
<td><a class="moz-txt-link-abbreviated" href="mailto:paf1@email.cz">paf1(a)email.cz</a> <a class="moz-txt-link-rfc2396E" href="mailto:paf1@email.cz"><paf1(a)email.cz></a></td>
</tr>
</tbody>
</table>
<br>
<br>
<pre>Can you send this to the users list?
This looks like virt issue, so it should be checked by the guys
working on this pars of the code.
Thanks,
Nir
On Thu, Mar 17, 2016 at 4:07 PM, <a class="moz-txt-link-abbreviated" href="mailto:paf1@email.cz">paf1(a)email.cz</a> <a class="moz-txt-link-rfc2396E" href="mailto:paf1@email.cz"><paf1(a)email.cz></a> wrote:
> Hi Nir,
> look at piece of logs which are repeated in cycle.
>
> The main issue happened about 3-5AM today ( 17.Mar)
>
> CSA_EBSDB_TEST2 - was shutted down from OS , but status was not updated in
> oVirt GUI ( changed manually in DB ( status 1 )) , still one other VM is in
> status "8" due snapshot locked file ( sf-sh-s07) .
>
> engine.log
> ==========
>
> repeately hours by hours ... continually
>
> 2016-03-17 14:38:21,146 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-20) START,
> GlusterVolumesListVDSCommand(HostName = 2kvm1, HostId =
> 4c3a2622-14d5-43c8-8e15-99cb66104b5a), log id: 5a34e053
> 2016-03-17 14:38:21,830 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-20) FINISH, GlusterVolumesListVDSCommand,
> return:
> {a5a8ccbc-edee-4e49-9e2a-4d2ee5767f76=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@240192c6,
> 18310aeb-639f-4b6d-9ef4-9ef560d6175c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@753f6685,
> 4a6d775d-4a51-4f6c-9bfa-f7ef57f3ca1d=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@79a21b20,
> f410c6a9-9a51-42b3-89bb-c20ac72a0461=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a4634e44,
> 62c89345-fd61-4b67-b8b4-69296eb7d217=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@fd990620,
> aa2d607d-3c6c-4f13-8205-aae09dcc9d35=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@57883869,
> b4356604-4404-428a-9da6-f1636115e2fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@3b458bc8,
> 9745551f-4696-4a6c-820a-619e359a61fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@80f225de,
> 25a5ec22-660e-42a0-aa00-45211d341738=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@ec4c19bd,
> 6060ff77-d552-4d94-97bf-5a32982e7d8a=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@947dc2e4,
> cbf142f8-a40b-4cf4-ad29-2243c81d30c1=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f773ab98},
> log id: 5a34e053
> 2016-03-17 14:38:27,131 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-79) START,
> GlusterVolumesListVDSCommand(HostName = 2kvm1, HostId =
> 4c3a2622-14d5-43c8-8e15-99cb66104b5a), log id: 24e7703f
> 2016-03-17 14:38:27,801 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-79) FINISH, GlusterVolumesListVDSCommand,
> return:
> {a5a8ccbc-edee-4e49-9e2a-4d2ee5767f76=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@4e72f0f4,
> 18310aeb-639f-4b6d-9ef4-9ef560d6175c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@89bfd4dd,
> 4a6d775d-4a51-4f6c-9bfa-f7ef57f3ca1d=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f6cb25b,
> f410c6a9-9a51-42b3-89bb-c20ac72a0461=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f4bb56bf,
> 62c89345-fd61-4b67-b8b4-69296eb7d217=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@e0121f88,
> aa2d607d-3c6c-4f13-8205-aae09dcc9d35=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@435fc00f,
> b4356604-4404-428a-9da6-f1636115e2fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7b23bf23,
> 9745551f-4696-4a6c-820a-619e359a61fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@1f8e886,
> 25a5ec22-660e-42a0-aa00-45211d341738=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@1fbbe1c1,
> 6060ff77-d552-4d94-97bf-5a32982e7d8a=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@87c991cd,
> cbf142f8-a40b-4cf4-ad29-2243c81d30c1=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@2fc8ef3e},
> log id: 24e7703f
> 2016-03-17 14:38:33,097 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-15) START,
> GlusterVolumesListVDSCommand(HostName = 2kvm1, HostId =
> 4c3a2622-14d5-43c8-8e15-99cb66104b5a), log id: 2e987652
> 2016-03-17 14:38:33,809 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-15) FINISH, GlusterVolumesListVDSCommand,
> return:
> {a5a8ccbc-edee-4e49-9e2a-4d2ee5767f76=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@22f57524,
> 18310aeb-639f-4b6d-9ef4-9ef560d6175c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@229b8873,
> 4a6d775d-4a51-4f6c-9bfa-f7ef57f3ca1d=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@d9e0727e,
> f410c6a9-9a51-42b3-89bb-c20ac72a0461=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@3e54e436,
> 62c89345-fd61-4b67-b8b4-69296eb7d217=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a32a922d,
> aa2d607d-3c6c-4f13-8205-aae09dcc9d35=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@ef616411,
> b4356604-4404-428a-9da6-f1636115e2fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@987712e1,
> 9745551f-4696-4a6c-820a-619e359a61fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@21786d69,
> 25a5ec22-660e-42a0-aa00-45211d341738=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@3411ecb4,
> 6060ff77-d552-4d94-97bf-5a32982e7d8a=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@9ccdb073,
> cbf142f8-a40b-4cf4-ad29-2243c81d30c1=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@ae4e2f13},
> log id: 2e987652
> 2016-03-17 14:38:39,131 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-70) START,
> GlusterVolumesListVDSCommand(HostName = 2kvm1, HostId =
> 4c3a2622-14d5-43c8-8e15-99cb66104b5a), log id: 2d9df607
> 2016-03-17 14:38:39,812 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-70) FINISH, GlusterVolumesListVDSCommand,
> return:
> {a5a8ccbc-edee-4e49-9e2a-4d2ee5767f76=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@24ba8cf2,
> 18310aeb-639f-4b6d-9ef4-9ef560d6175c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@be52739d,
> 4a6d775d-4a51-4f6c-9bfa-f7ef57f3ca1d=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@fa7acd26,
> f410c6a9-9a51-42b3-89bb-c20ac72a0461=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@bfa54163,
> 62c89345-fd61-4b67-b8b4-69296eb7d217=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a50ab364,
> aa2d607d-3c6c-4f13-8205-aae09dcc9d35=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@c85c798b,
> b4356604-4404-428a-9da6-f1636115e2fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@4404dc57,
> 9745551f-4696-4a6c-820a-619e359a61fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a87b6b00,
> 25a5ec22-660e-42a0-aa00-45211d341738=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@58e582ba,
> 6060ff77-d552-4d94-97bf-5a32982e7d8a=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@127588cb,
> cbf142f8-a40b-4cf4-ad29-2243c81d30c1=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@101be9b2},
> log id: 2d9df607
> 2016-03-17 14:38:45,136 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-91) START,
> GlusterVolumesListVDSCommand(HostName = 2kvm1, HostId =
> 4c3a2622-14d5-43c8-8e15-99cb66104b5a), log id: 4b3faf1c
> 2016-03-17 14:38:45,152 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterTasksListVDSCommand]
> (DefaultQuartzScheduler_Worker-43) START,
> GlusterTasksListVDSCommand(HostName = 1kvm1, HostId =
> 98c4520a-bcff-45b2-8f66-e360e10e1fb2), log id: 1df75525
> 2016-03-17 14:38:45,400 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterTasksListVDSCommand]
> (DefaultQuartzScheduler_Worker-43) FINISH, GlusterTasksListVDSCommand,
> return: [], log id: 1df75525
> 2016-03-17 14:38:45,814 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-91) FINISH, GlusterVolumesListVDSCommand,
> return:
> {a5a8ccbc-edee-4e49-9e2a-4d2ee5767f76=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@53cb3de0,
> 18310aeb-639f-4b6d-9ef4-9ef560d6175c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@2cf0ae7f,
> 4a6d775d-4a51-4f6c-9bfa-f7ef57f3ca1d=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@bac16bbe,
> f410c6a9-9a51-42b3-89bb-c20ac72a0461=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@5b460bdf,
> 62c89345-fd61-4b67-b8b4-69296eb7d217=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@47213703,
> aa2d607d-3c6c-4f13-8205-aae09dcc9d35=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@714406f8,
> b4356604-4404-428a-9da6-f1636115e2fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@94740550,
> 9745551f-4696-4a6c-820a-619e359a61fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@c00582e3,
> 25a5ec22-660e-42a0-aa00-45211d341738=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@e5687263,
> 6060ff77-d552-4d94-97bf-5a32982e7d8a=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a0163ead,
> cbf142f8-a40b-4cf4-ad29-2243c81d30c1=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@1e1ea424},
> log id: 4b3faf1c
>
>
> vdsm.log
> this block repeate non stop ..
>
> Thread-798::DEBUG::2016-03-17
> 14:41:07,108::libvirtconnection::151::root::(wrapper) Unknown libvirterror:
> ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with
> matching uuid 'a60a0eae-9738-4833-9feb-de2494c545a4' (CSA_EBSDB_TEST2)
> Thread-798::DEBUG::2016-03-17
> 14:41:07,109::libvirtconnection::151::root::(wrapper) Unknown libvirterror:
> ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with
> matching uuid 'a60a0eae-9738-4833-9feb-de2494c545a4' (CSA_EBSDB_TEST2)
> Thread-798::DEBUG::2016-03-17
> 14:41:07,111::libvirtconnection::151::root::(wrapper) Unknown libvirterror:
> ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with
> matching uuid 'a60a0eae-9738-4833-9feb-de2494c545a4' (CSA_EBSDB_TEST2)
> Thread-798::DEBUG::2016-03-17
> 14:41:07,113::libvirtconnection::151::root::(wrapper) Unknown libvirterror:
> ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with
> matching uuid 'a60a0eae-9738-4833-9feb-de2494c545a4' (CSA_EBSDB_TEST2)
> Thread-798::DEBUG::2016-03-17
> 14:41:07,114::libvirtconnection::151::root::(wrapper) Unknown libvirterror:
> ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with
> matching uuid 'a60a0eae-9738-4833-9feb-de2494c545a4' (CSA_EBSDB_TEST2)
> Thread-158::DEBUG::2016-03-17
> 14:41:07,521::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
> if=/rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P5/1ca56b45-701e-4c22-9f59-3aebea4d8477/dom_md/metadata
> iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
> Thread-158::DEBUG::2016-03-17
> 14:41:07,560::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
> <err> = '0+1 records in\n0+1 records out\n998 bytes (998 B) copied,
> 0.000523763 s, 1.9 MB/s\n'; <rc> = 0
> Thread-180::DEBUG::2016-03-17
> 14:41:07,565::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
> if=/rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P1/42d710a9-b844-43dc-be41-77002d1cd553/dom_md/metadata
> iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
> Thread-126::DEBUG::2016-03-17
> 14:41:07,566::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
> if=/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P1/553d9b92-e4a0-4042-a579-4cabeb55ded4/dom_md/metadata
> iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
> Thread-180::DEBUG::2016-03-17
> 14:41:07,616::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
> <err> = '0+1 records in\n0+1 records out\n333 bytes (333 B) copied,
> 0.000606695 s, 549 kB/s\n'; <rc> = 0
> Thread-158::INFO::2016-03-17
> 14:41:07,616::clusterlock::219::Storage.SANLock::(acquireHostId) Acquiring
> host id for domain 1ca56b45-701e-4c22-9f59-3aebea4d8477 (id: 3)
> Thread-158::DEBUG::2016-03-17
> 14:41:07,619::clusterlock::237::Storage.SANLock::(acquireHostId) Host id for
> domain 1ca56b45-701e-4c22-9f59-3aebea4d8477 successfully acquired (id: 3)
> Thread-126::DEBUG::2016-03-17
> 14:41:07,620::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
> <err> = '0+1 records in\n0+1 records out\n333 bytes (333 B) copied,
> 0.000476478 s, 699 kB/s\n'; <rc> = 0
> Thread-180::INFO::2016-03-17
> 14:41:07,623::clusterlock::219::Storage.SANLock::(acquireHostId) Acquiring
> host id for domain 42d710a9-b844-43dc-be41-77002d1cd553 (id: 3)
> Thread-180::DEBUG::2016-03-17
> 14:41:07,624::clusterlock::237::Storage.SANLock::(acquireHostId) Host id for
> domain 42d710a9-b844-43dc-be41-77002d1cd553 successfully acquired (id: 3)
> Thread-126::INFO::2016-03-17
> 14:41:07,626::clusterlock::219::Storage.SANLock::(acquireHostId) Acquiring
> host id for domain 553d9b92-e4a0-4042-a579-4cabeb55ded4 (id: 3)
> Thread-126::DEBUG::2016-03-17
> 14:41:07,626::clusterlock::237::Storage.SANLock::(acquireHostId) Host id for
> domain 553d9b92-e4a0-4042-a579-4cabeb55ded4 successfully acquired (id: 3)
> Thread-1897022::DEBUG::2016-03-17
> 14:41:07,701::__init__::481::jsonrpc.JsonRpcServer::(_serveRequest) Calling
> 'GlusterVolume.list' in bridge with {}
> Thread-113::DEBUG::2016-03-17
> 14:41:07,704::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
> if=/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-BCK/0fcad888-d573-47be-bef3-0bc0b7a99fb7/dom_md/metadata
> iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
> Thread-113::DEBUG::2016-03-17
> 14:41:07,747::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
> <err> = '0+1 records in\n0+1 records out\n335 bytes (335 B) copied,
> 0.000568018 s, 590 kB/s\n'; <rc> = 0
> Thread-198::DEBUG::2016-03-17
> 14:41:07,758::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
> if=/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P2/88adbd49-62d6-45b1-9992-b04464a04112/dom_md/metadata
> iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
> Thread-188::DEBUG::2016-03-17
> 14:41:07,758::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
> if=/rhev/data-center/mnt/glusterSD/localhost:_2KVM12__P4/300e9ac8-3c2f-4703-9bb1-1df2130c7c97/dom_md/metadata
> iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
> Thread-198::DEBUG::2016-03-17
> 14:41:07,811::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
> <err> = '0+1 records in\n0+1 records out\n333 bytes (333 B) copied,
> 0.000455407 s, 731 kB/s\n'; <rc> = 0
> Thread-188::DEBUG::2016-03-17
> 14:41:07,815::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
> <err> = '0+1 records in\n0+1 records out\n734 bytes (734 B) copied,
> 0.000535009 s, 1.4 MB/s\n'; <rc> = 0
> Thread-198::INFO::2016-03-17
> 14:41:07,826::clusterlock::219::Storage.SANLock::(acquireHostId) Acquiring
> host id for domain 88adbd49-62d6-45b1-9992-b04464a04112 (id: 3)
> Thread-198::DEBUG::2016-03-17
> 14:41:07,828::clusterlock::237::Storage.SANLock::(acquireHostId) Host id for
> domain 88adbd49-62d6-45b1-9992-b04464a04112 successfully acquired (id: 3)
> Thread-98::DEBUG::2016-03-17
> 14:41:07,838::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
> if=/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P3/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/metadata
> iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
> Thread-98::DEBUG::2016-03-17
> 14:41:07,870::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
> <err> = '0+1 records in\n0+1 records out\n998 bytes (998 B) copied,
> 0.000564777 s, 1.8 MB/s\n'; <rc> = 0
> VM Channels Listener::DEBUG::2016-03-17
> 14:41:07,883::vmchannels::133::vds::(_handle_unconnected) Trying to connect
> fileno 43.
> VM Channels Listener::DEBUG::2016-03-17
> 14:41:07,889::vmchannels::133::vds::(_handle_unconnected) Trying to connect
> fileno 47.
>
>
>
>
> On 17.3.2016 12:58, Nir Soffer wrote:
>
> On Thu, Mar 17, 2016 at 11:35 AM, <a class="moz-txt-link-abbreviated" href="mailto:paf1@email.cz">paf1(a)email.cz</a> <a class="moz-txt-link-rfc2396E" href="mailto:paf1@email.cz"><paf1(a)email.cz></a> wrote:
>
> I used that, but lock active in a few seconds again.
> And oVirt do not update any VM's status
>
> Unlocking entities is ok when you know that the operation that took
> the lock is finished
> or failed. This is a workaround for buggy operations leaving disks in
> locked state, not
> a normal way to use the system.
>
> We first must understand what is the flow that caused the snapshot to
> be locked, and
> why it remain locked.
>
> Please describe in detail the operations in the engine side, and
> provide engine and vdsm
> logs showing this timeframe.
>
> Nir
>
> Pa.
>
>
> On 17.3.2016 10:26, Eli Mesika wrote:
>
>
>
> ________________________________
>
> From: <a class="moz-txt-link-abbreviated" href="mailto:paf1@email.cz">paf1(a)email.cz</a>
> To: "users" <a class="moz-txt-link-rfc2396E" href="mailto:users@ovirt.org"><users(a)ovirt.org></a>
> Sent: Thursday, March 17, 2016 9:27:11 AM
> Subject: [ovirt-users] question mark on VM ( DB status 8 )
>
> Hello,
> during backup
>
> What do you mean by "backup"? Can you describe how do you backup the vm?
>
> VM hanged with question mark in ovirt and status 8 in DB,
> snapshot file ( for backup )is locked.
> How to clean snapshot locking a wake up this VM from "unknow" state ???
>
>
> Try using the unlock_entity.sh utility (run with --help for usage)
>
>
>
> regs.
> pavel
>
> _______________________________________________
> Users mailing list
> <a class="moz-txt-link-abbreviated" href="mailto:Users@ovirt.org">Users(a)ovirt.org</a>
> <a class="moz-txt-link-freetext" href="http://lists.ovirt.org/mailman/listinfo/users">http://lists.ovirt.org/mailman/listinfo/users</a>
>
>
>
>
> _______________________________________________
> Users mailing list
> <a class="moz-txt-link-abbreviated" href="mailto:Users@ovirt.org">Users(a)ovirt.org</a>
> <a class="moz-txt-link-freetext" href="http://lists.ovirt.org/mailman/listinfo/users">http://lists.ovirt.org/mailman/listinfo/users</a>
>
>
Re: [ovirt-users] Howto Backup and Restore on new Server
by Taste-Of-IT
Hello Yaniv,
Ah, OK, understood. So the Export Domain will no longer be supported as
of version X. Do you have a link to this information?
Thanks
On 2016-03-17 14:48, Yaniv Dary wrote:
> As in removing it in the future in favor of the replacement flows.
>
> Yaniv Dary
> Technical Product Manager
> Red Hat Israel Ltd.
> 34 Jerusalem Road
> Building A, 4th floor
> Ra'anana, Israel 4350109
>
> Tel : +972 (9) 7692306
> 8272306
> Email: ydary(a)redhat.com
> IRC : ydary
>
> On Tue, Mar 15, 2016 at 2:11 PM, Taste-Of-IT <kontakt(a)taste-of-it.de>
> wrote:
>
>> Hello Yaniv,
>>
>> That was ironic, right? Deprecating as in deny or refuse?
>>
>> On 2016-03-15 12:16, Yaniv Dary wrote:
>>
>> Since we will be deprecating the export domain in favor of the
>> import storage domain flow in the future, I would use the latter.
>> It should be a much better experience with less downtime.
>>
>> Yaniv Dary
>> Technical Product Manager
>> Red Hat Israel Ltd.
>> 34 Jerusalem Road
>> Building A, 4th floor
>> Ra'anana, Israel 4350109
>>
>> Tel : +972 (9) 7692306
>> 8272306
>> Email: ydary(a)redhat.com
>> IRC : ydary
>>
>> On Tue, Mar 15, 2016 at 1:03 PM, Taste-Of-IT
>> <kontakt(a)taste-of-it.de>
>> wrote:
>>
>> Hello Yaniv,
>>
>> Thanks for your hint. Seems a good idea. Meanwhile I created an
>> export domain with the path to a mountpoint and exported the VMs.
>> Next is to reinstall the server and import the VMs. What is, in your
>> view, the better solution, and what are the pros and cons?
>>
>> Thanks
>>
>> On 2016-03-15 11:58, Yaniv Dary wrote:
>>
>> You can move the VMs to a new storage domain and use the import
>> storage domain feature to move VMs from one oVirt to another.
>>
>> Yaniv Dary
>> Technical Product Manager
>> Red Hat Israel Ltd.
>> 34 Jerusalem Road
>> Building A, 4th floor
>> Ra'anana, Israel 4350109
>>
>> Tel : +972 (9) 7692306
>> 8272306
>> Email: ydary(a)redhat.com
>> IRC : ydary
>>
>> On Mon, Mar 14, 2016 at 5:38 PM, Taste-Of-IT
>> <kontakt(a)taste-of-it.de>
>> wrote:
>>
>> Hello,
>> I want to back up a few VMs, delete the old server, install a new
>> one on the same hardware, and restore the VMs. I use oVirt as
>> All-in-One. What is the best way? Can I simply copy the full
>> snapshot to an external USB disk, or should I use an external backup
>> domain for that? And for the restore, do I simply copy the snapshot
>> backup to the VM domain, or use the same backup domain to import the
>> VMs?
>>
>> Thanks
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
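
For reference, exporting a VM to an export domain can also be scripted
against the 3.6 REST API instead of the Admin Portal. A sketch; the
engine URL, credentials, VM ID and export domain name are placeholders,
and the action endpoint should be verified against your API version:

  curl -k -u 'admin@internal:PASSWORD' \
    -H 'Content-Type: application/xml' \
    -d '<action><storage_domain><name>myexport</name></storage_domain></action>' \
    'https://engine.example.com/ovirt-engine/api/vms/<vm-id>/export'
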
question mark on VM ( DB status 8 )
by paf1@email.cz
Hello,
during backup the VM hung with a question mark in oVirt and status 8
in the DB; the snapshot file (for backup) is locked.
How do I clear the snapshot lock and wake this VM up from the
"unknown" state?
regs.
pavel
[hosted-engine] Setup broke host's network
by Wee Sritippho
Hi,
I set up the host's network while installing CentOS 7 (GUI), so the
network configuration looks like this:
eno1 --> bond0_slave1 --\
|--> bond0
eno2 --> bond0_slave2 --/
After I disabled NetworkManager and ran 'hosted-engine --deploy', the
setup got stuck at this line:
[ INFO ] Configuring the management bridge
Then the SSH connection was lost. I accessed the console and found
this line after the one above:
[ ERROR ] Failed to execute stage 'Misc configuration': Connection to
storage server failed
And the network was left broken. To bring it back up I had to:
1. delete the MASTER=bond0 and SLAVE=yes lines in the ifcfg-eno{1,2}
   config files
2. re-configure ifcfg-bond0 with a static IP
3. turn off and delete the ovirtmgmt bridge
4. restart the network
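
For reference, a non-NM ifcfg set for such a bond looks roughly like
this (a sketch; device names, bonding options and the address are
placeholders):

  # /etc/sysconfig/network-scripts/ifcfg-bond0 -- adjust to your site
  DEVICE=bond0
  BONDING_OPTS="mode=active-backup miimon=100"
  BOOTPROTO=none
  IPADDR=192.0.2.10
  NETMASK=255.255.255.0
  ONBOOT=yes
  NM_CONTROLLED=no

  # /etc/sysconfig/network-scripts/ifcfg-eno1 -- repeat for eno2
  DEVICE=eno1
  MASTER=bond0
  SLAVE=yes
  ONBOOT=yes
  NM_CONTROLLED=no
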
Did this network configuration really break the setup, or was it
something else? If the network configuration is the cause, how can I
proceed with installing the oVirt hosted engine?
I attached the answer file, installation log, vdsm.log and supervdsm.log
with this email.
Environment:
- CentOS Linux release 7.2.1511 (Core)
- ovirt-release36-003-1.noarch
- ovirt-hosted-engine-setup-1.3.3.4-1.el7.centos.noarch
- vdsm-4.17.23-1.el7.noarch
Thank you,
Wee
---
This email has been checked for viruses by Avast antivirus software.
https://www.avast.com/antivirus
Re: [ovirt-users] 3.6.3 : Disable Networkmanager?
by Edward Haas
On Thu, Mar 17, 2016 at 9:23 AM, Nicolas Ecarnot <nicolas(a)ecarnot.net> wrote:
> On 17/03/2016 07:32, Edward Haas wrote:
>>
>> Hello Nicolas,
>>
>> Please let us know which docs you refer to.
>
>
> http://community.redhat.com/blog/2014/11/up-and-running-with-ovirt-3-5-pa...
>
>> Engine by itself should work fine with NM.
>
>
> I'm installing my 5th 3.6.3 oVirt DC this month, and I'm still
> seeing that when I keep NM_CONTROLLED=yes in my
> /etc/sysconfig/network-scripts/ifcfg-blahblahblah files, the network
> setup doesn't survive a reboot (we're using bonding + VLANs, and it
> all goes well with NO NetworkManager).
Do you refer to the host(s) (which run the hypervisor with the VMs),
or to the Engine?
If you refer to the Engine: setting up bonding and VLANs there
manually (using ifcfg files?) may collide with NM in certain cases,
but that is a general issue, not related to the Engine specifically.
You could define the bonds and VLANs using NM (nmcli).
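
A sketch of that on CentOS 7 (connection names, devices, the VLAN ID
and the address are placeholders):

  nmcli con add type bond con-name bond0 ifname bond0 mode active-backup
  nmcli con add type bond-slave con-name bond0-p1 ifname eno1 master bond0
  nmcli con add type bond-slave con-name bond0-p2 ifname eno2 master bond0
  nmcli con add type vlan con-name bond0.100 ifname bond0.100 dev bond0 id 100
  nmcli con mod bond0.100 ipv4.method manual ipv4.addresses 192.0.2.10/24
  nmcli con up bond0 && nmcli con up bond0.100
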
>
>
>>
>> The only exception is when you try to run VDSM on the same host
>> (which is not recommended).
>>
>> Thanks,
>> Edy.
>>
>> On Tue, Mar 15, 2016 at 4:44 PM, Nicolas Ecarnot <nicolas(a)ecarnot.net>
>> wrote:
>>>
>>> Hello,
>>>
>>> Several docs are contradictory about NetworkManager on the engine.
>>> Do we have to disable it in 3.6.3? (we're using CentOS 7.2)
>>>
>>> Thank you.
>>>
>>> --
>>> Nicolas ECARNOT
>>> _______________________________________________
>>> Users mailing list
>>> Users(a)ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> --
> Nicolas ECARNOT
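
Where one does decide to hand host networking to the legacy network
service instead of NM, as described above, the usual CentOS 7 sequence
is (a sketch; run on the hosts, not the engine):

  systemctl stop NetworkManager
  systemctl disable NetworkManager
  systemctl enable network
  systemctl restart network
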