Timeout for Hosts
by Sven Achtelik
Hi All,
I have 2 hosts which are at remote locations where the ISP is forcing a
connection reset after some days. During that reset the connection will be
down for at most 2 minutes and the engine starts to complain about the
hosts not being reachable. What is the right value to tweak to compensate
for this?
Is it one of these: TimeoutToResetVdsInSeconds, VdsRefreshRate, vdsTimeout?
And is it possible to apply this only for a certain cluster or DC, or is
it global?
Thank you,
Sven
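For reference, these keys can be read and set with engine-config on the
engine host. A minimal sketch (the timeout value is illustrative, and as
far as I know these options are engine-global, not per-cluster or per-DC):

# show the current value, then raise it; a restart is needed to apply
engine-config -g TimeoutToResetVdsInSeconds
engine-config -s TimeoutToResetVdsInSeconds=180
systemctl restart ovirt-engine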
Building ovirt-host-deploy gives `configure: error: otopi-devtools required but missing`
by Leni Kadali Mutungi
Hello. I got the error mentioned in the subject when trying to
build ovirt-host-deploy. Full output here:
https://paste.fedoraproject.org/paste/Dh7FF1XjhDa2TGNSBs~o815M1UNdIGYhyRL...
The troubleshooting I've done so far is as follows:
1. Rebuilt otopi, running the commands `autoreconf -ivf` and then
`./configure --enable-java-sdk --with-maven`. This made `make`
download a host of stuff for the otopi project from
https://repo.maven.apache.org
It did this again when I ran `make install`. When I went back to the
ovirt-host-deploy directory, I ran `./configure`, both with the
--enable-java-sdk and --with-maven options. Got the same error again.
2. I tried entering the directory src/java and running `mvn
install`. I received a build success message and then proceeded to run
./configure --build=x86_64-linux-gnu --host=x86_64-linux-gnu
--program-prefix= --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin
--sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share
--includedir=/usr/include --libdir=/usr/lib64
--libexecdir=/usr/libexec --localstatedir=/var
--sharedstatedir=/var/lib --mandir=/usr/share/man
--disable-python-syntax-check --enable-java-sdk
--with-local-version=otopi-1.7.0.master
COMMONS_LOGGING_JAR=/usr/share/java/commons-logging.jar
JUNIT_JAR=/usr/share/java/junit.jar
I omitted the --disable-dependency-tracking option because configure
warned "configure: WARNING: unrecognized options:
--disable-dependency-tracking", and the --docdir=/usr/share/doc/otopi-1.7.0
option because I don't have that directory. I got the following
output: https://paste.fedoraproject.org/paste/90D0k1AyVPGDbNhk1WxlBl5M1UNdIGYhyRL...
Running `sudo make install` gives me the same errors. It seems otopi
is failing to compile properly and, as a result, running ./configure
within the ovirt-host-deploy directory ends with an error saying
otopi-devtools are missing. Documentation of my environment as it
stands is here:
https://github.com/leni1/oVirt-docs-Debian/blob/master/oVirt-Development-...
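For reference, the rebuild sequence I have been attempting looks roughly
like this (a sketch; the source paths are assumptions about my layout, and
I am not sure exactly which installed file ovirt-host-deploy's configure
probes for):

# rebuild and install otopi first
cd ~/src/otopi
autoreconf -ivf
./configure --enable-java-sdk --with-maven
make && sudo make install
# then reconfigure ovirt-host-deploy against the freshly installed otopi
cd ~/src/ovirt-host-deploy
autoreconf -ivf
./configure --enable-java-sdk --with-maven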
Any ideas on what I should try next are welcome.
--
- Warm regards
Leni Kadali Mutungi
Upgrade 4.0.6 to 4.1.1 -- How to Update oVirt Node (4.x) Hosts?
by Beckman, Daniel
So I successfully upgraded my engine from 4.0.6 to 4.1.1 with no major issues.

A nice thing I noticed was that my custom CA certificate for https on the
admin and user portals wasn't clobbered by setup.

I did have to restore my custom settings for ISO uploader, log collector,
and websocket proxy:
cp /etc/ovirt-engine/isouploader.conf.d/10-engine-setup.conf.<latest_timestamp> /etc/ovirt-engine/isouploader.conf.d/10-engine-setup.conf
cp /etc/ovirt-engine/ovirt-websocket-proxy.conf.d/10-setup.conf.<latest_timestamp> /etc/ovirt-engine/ovirt-websocket-proxy.conf.d/10-setup.conf
cp /etc/ovirt-engine/logcollector.conf.d/10-engine-setup.conf.<latest_timestamp> /etc/ovirt-engine/logcollector.conf.d/10-engine-setup.conf

Now I'm moving on to updating the oVirt Node hosts, which are currently at
oVirt Node 4.0.6.1. (I'm assuming I should do that before attempting to
upgrade the cluster and data center compatibility level to 4.1.)

When I right-click on a host and go to Installation / Check for Upgrade,
the result is "no updates found." When I log into that host directly, I
notice it's still got the oVirt 4.0 repo, not 4.1. Is there an extra step
I'm missing? The documentation I've found
(http://www.ovirt.org/documentation/upgrade-guide/chap-Updates_between_Minor_Releases/)
doesn't mention this.

**
If I can offer some unsolicited feedback: I feel like this list is
populated with a lot of questions that could be averted with a little care
and feeding of the documentation. It's unfortunate because that makes for
a rocky introduction to oVirt, and it makes it look like a neglected
project, which I know is not the case.

On a related note, I know this has been discussed before but...
The centralized control in GitHub for the documentation does not really
encourage user contributions. What's wrong with a wiki? If we're really
concerned about bad or malicious edits being posted, keep the official
docs in git and add a separate wiki that is clearly marked as
user-contributed.
**

Thanks,
Daniel
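A likely missing step, sketched on the assumption that the hosts still
carry only the 4.0 release package: install the 4.1 release RPM on each
host so the new repo becomes visible, then re-run the check.

# on each host (package URL is the standard oVirt release RPM for 4.1)
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
yum clean all
# then retry Installation / Check for Upgrade from the web admin UI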
Hosted engine FCP SAN can not activate data domain
by Jens Oechsler
Hello
I have a problem with oVirt Hosted Engine Setup version: 4.0.5.5-1.el7.centos.
Setup is using FCP SAN for data and engine.
Cluster has worked fine for a while. It has two hosts with VMs running.
I extended storage with an additional LUN recently. This LUN seems to
be gone from the data domain, and one VM is paused which I assume has
data on that device.
Got these errors in events:
Apr 24, 2017 10:26:05 AM
Failed to activate Storage Domain SD (Data Center DC) by admin@internal-authz
Apr 10, 2017 3:38:08 PM
Status of host cl01 was set to Up.
Apr 10, 2017 3:38:03 PM
Host cl01 does not enforce SELinux. Current status: DISABLED
Apr 10, 2017 3:37:58 PM
Host cl01 is initializing. Message: Recovering from crash or Initializing
Apr 10, 2017 3:37:58 PM
VDSM cl01 command failed: Recovering from crash or Initializing
Apr 10, 2017 3:37:46 PM
Failed to Reconstruct Master Domain for Data Center DC.
Apr 10, 2017 3:37:46 PM
Host cl01 is not responding. Host cannot be fenced automatically
because power management for the host is disabled.
Apr 10, 2017 3:37:46 PM
VDSM cl01 command failed: Broken pipe
Apr 10, 2017 3:37:46 PM
VDSM cl01 command failed: Broken pipe
Apr 10, 2017 3:32:45 PM
Invalid status on Data Center DC. Setting Data Center status to Non
Responsive (On host cl01, Error: General Exception).
Apr 10, 2017 3:32:45 PM
VDSM cl01 command failed: [Errno 19] Could not find dm device named `[unknown]`
Apr 7, 2017 1:28:04 PM
VM HostedEngine is down with error. Exit message: resource busy:
Failed to acquire lock: error -243.
Apr 7, 2017 1:28:02 PM
Storage Pool Manager runs on Host cl01 (Address: cl01).
Apr 7, 2017 1:27:59 PM
Invalid status on Data Center DC. Setting status to Non Responsive.
Apr 7, 2017 1:27:53 PM
Host cl02 does not enforce SELinux. Current status: DISABLED
Apr 7, 2017 1:27:52 PM
Host cl01 does not enforce SELinux. Current status: DISABLED
Apr 7, 2017 1:27:49 PM
Affinity Rules Enforcement Manager started.
Apr 7, 2017 1:27:34 PM
ETL Service Started
Apr 7, 2017 1:26:01 PM
ETL Service Stopped
Apr 3, 2017 1:22:54 PM
Shutdown of VM HostedEngine failed.
Apr 3, 2017 1:22:52 PM
Storage Pool Manager runs on Host cl01 (Address: cl01).
Apr 3, 2017 1:22:49 PM
Invalid status on Data Center DC. Setting status to Non Responsive.
Master data domain is inactive.
vdsm.log:
jsonrpc.Executor/5::INFO::2017-04-20
07:01:26,796::lvm::1226::Storage.LVM::(activateLVs) Refreshing lvs:
vg=bd616961-6da7-4eb0-939e-330b0a3fea6e lvs=['ids']
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:26,796::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/taskset
--cpu-list 0-39 /usr/bin/sudo -n /usr/sbin/lvm lvchange --config '
devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [
'\''a|/dev/mapper/360050768018182b6c00000000000099e|[unknown]|'\'',
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --refresh
bd616961-6da7-4eb0-939e-330b0a3fea6e/ids (cwd None)
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:26,880::lvm::288::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = "
WARNING: Not using lvmetad because config setting use_lvmetad=0.\n
WARNING: To avoid corruption, rescan devices to make changes
visible (pvscan --cache).\n Couldn't find device with uuid
jDB9VW-bNqY-UIKc-XxXp-xnyK-ZTlt-7Cpa1U.\n"; <rc> = 0
jsonrpc.Executor/5::INFO::2017-04-20
07:01:26,881::lvm::1226::Storage.LVM::(activateLVs) Refreshing lvs:
vg=bd616961-6da7-4eb0-939e-330b0a3fea6e lvs=['leases']
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:26,881::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/taskset
--cpu-list 0-39 /usr/bin/sudo -n /usr/sbin/lvm lvchange --config '
devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [
'\''a|/dev/mapper/360050768018182b6c00000000000099e|[unknown]|'\'',
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --refresh
bd616961-6da7-4eb0-939e-330b0a3fea6e/leases (cwd None)
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:26,973::lvm::288::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = "
WARNING: Not using lvmetad because config setting use_lvmetad=0.\n
WARNING: To avoid corruption, rescan devices to make changes
visible (pvscan --cache).\n Couldn't find device with uuid
jDB9VW-bNqY-UIKc-XxXp-xnyK-ZTlt-7Cpa1U.\n"; <rc> = 0
jsonrpc.Executor/5::INFO::2017-04-20
07:01:26,973::lvm::1226::Storage.LVM::(activateLVs) Refreshing lvs:
vg=bd616961-6da7-4eb0-939e-330b0a3fea6e lvs=['metadata', 'leases',
'ids', 'inbox', 'outbox', 'master']
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:26,974::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/taskset
--cpu-list 0-39 /usr/bin/sudo -n /usr/sbin/lvm lvchange --config '
devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [
'\''a|/dev/mapper/360050768018182b6c00000000000099e|[unknown]|'\'',
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --refresh
bd616961-6da7-4eb0-939e-330b0a3fea6e/metadata
bd616961-6da7-4eb0-939e-330b0a3fea6e/leases
bd616961-6da7-4eb0-939e-330b0a3fea6e/ids
bd616961-6da7-4eb0-939e-330b0a3fea6e/inbox bd616961-6da7-4eb0-939e-330b0a3fea6e/outbox
bd616961-6da7-4eb0-939e-330b0a3fea6e/master (cwd None)
Reactor thread::INFO::2017-04-20
07:01:27,069::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from ::1:44692
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:27,070::lvm::288::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = "
WARNING: Not using lvmetad because config setting use_lvmetad=0.\n
WARNING: To avoid corruption, rescan devices to make changes
visible (pvscan --cache).\n Couldn't find device with uuid
jDB9VW-bNqY-UIKc-XxXp-xnyK-ZTlt-7Cpa1U.\n"; <rc> = 0
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:27,070::sp::662::Storage.StoragePool::(_stopWatchingDomainsState)
Stop watching domains state
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:27,070::resourceManager::628::Storage.ResourceManager::(releaseResource)
Trying to release resource
'Storage.58493e81-01dc-01d8-0390-000000000032'
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:27,071::resourceManager::647::Storage.ResourceManager::(releaseResource)
Released resource 'Storage.58493e81-01dc-01d8-0390-000000000032' (0
active users)
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:27,071::resourceManager::653::Storage.ResourceManager::(releaseResource)
Resource 'Storage.58493e81-01dc-01d8-0390-000000000032' is free,
finding out if anyone is waiting for it.
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:27,071::resourceManager::661::Storage.ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.58493e81-01dc-01d8-0390-000000000032', Clearing records.
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:27,071::resourceManager::628::Storage.ResourceManager::(releaseResource)
Trying to release resource 'Storage.HsmDomainMonitorLock'
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:27,071::resourceManager::647::Storage.ResourceManager::(releaseResource)
Released resource 'Storage.HsmDomainMonitorLock' (0 active users)
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:27,071::resourceManager::653::Storage.ResourceManager::(releaseResource)
Resource 'Storage.HsmDomainMonitorLock' is free, finding out if anyone
is waiting for it.
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:27,071::resourceManager::661::Storage.ResourceManager::(releaseResource)
No one is waiting for resource 'Storage.HsmDomainMonitorLock',
Clearing records.
jsonrpc.Executor/5::ERROR::2017-04-20
07:01:27,072::task::868::Storage.TaskManager.Task::(_setError)
Task=`15122a21-4fb7-45bf-9a9a-4b97f27bc1e1`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 875, in _run
    return fn(*args, **kargs)
  File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 50, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 988, in connectStoragePool
    spUUID, hostID, msdUUID, masterVersion, domainsMap)
  File "/usr/share/vdsm/storage/hsm.py", line 1053, in _connectStoragePool
    res = pool.connect(hostID, msdUUID, masterVersion)
  File "/usr/share/vdsm/storage/sp.py", line 646, in connect
    self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
  File "/usr/share/vdsm/storage/sp.py", line 1219, in __rebuild
    self.setMasterDomain(msdUUID, masterVersion)
  File "/usr/share/vdsm/storage/sp.py", line 1427, in setMasterDomain
    domain = sdCache.produce(msdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 101, in produce
    domain.getRealDomain()
  File "/usr/share/vdsm/storage/sdc.py", line 53, in getRealDomain
    return self._cache._realProduce(self._sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 125, in _realProduce
    domain = self._findDomain(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 144, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/blockSD.py", line 1441, in findDomain
    return BlockStorageDomain(BlockStorageDomain.findDomainPath(sdUUID))
  File "/usr/share/vdsm/storage/blockSD.py", line 814, in __init__
    lvm.checkVGBlockSizes(sdUUID, (self.logBlkSize, self.phyBlkSize))
  File "/usr/share/vdsm/storage/lvm.py", line 1056, in checkVGBlockSizes
    _checkpvsblksize(pvs, vgBlkSize)
  File "/usr/share/vdsm/storage/lvm.py", line 1033, in _checkpvsblksize
    pvBlkSize = _getpvblksize(pv)
  File "/usr/share/vdsm/storage/lvm.py", line 1027, in _getpvblksize
    dev = devicemapper.getDmId(os.path.basename(pv))
  File "/usr/share/vdsm/storage/devicemapper.py", line 40, in getDmId
    deviceMultipathName)
OSError: [Errno 19] Could not find dm device named `[unknown]`
Any input on how to diagnose or troubleshoot this would be appreciated.
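One diagnostic sketch, run on the host acting as SPM, using the PV UUID
and VG name from the log above (field names are standard lvm2 reporting
fields):

multipath -ll                 # is the new LUN's multipath device present?
pvscan --cache                # rescan, as the log warnings suggest
pvs -o pv_name,pv_uuid,vg_name | grep jDB9VW || echo "PV still missing"
vgs -o vg_name,pv_count,vg_missing_pv_count bd616961-6da7-4eb0-939e-330b0a3fea6e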
--
Best Regards
Jens Oechsler
Change DNS Server
by Kai Wagner
Hi,
where can I change the DNS server for my hosts? I know the DNS entry is
part of the ifcfg-* file, but I tried to change it there and after a
reboot the old entry was restored from somewhere.
Kai
--
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
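On a vdsm-managed host the ifcfg files are typically rewritten at boot
from vdsm's persisted network configuration, which would explain the old
entry coming back. A sketch of where to look, assuming the default vdsm
persistence layout:

grep DNS /etc/sysconfig/network-scripts/ifcfg-*
ls /var/lib/vdsm/persistence/netconf/nets/
# the files under nets/ hold the per-network settings vdsm restores on boot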
log out event but not log in?
by Gianluca Cecchi
Hello,
In the oVirt 4.1 web admin GUI I see events about users logging out (they
were created in the internal domain with the ovirt-aaa-jdbc-tool command),
but I don't see the corresponding login events.
The same is true for the default admin@internal user.
Is there a reason for this, or is it a bug?
Thanks,
Gianluca
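One way to check whether the login events are recorded at all is to query
the engine's audit log directly. A sketch, assuming the default engine
database name and the usual audit_log columns:

sudo -u postgres psql engine -c "SELECT log_time, message FROM audit_log WHERE message ILIKE '%logged in%' OR message ILIKE '%logged out%' ORDER BY log_time DESC LIMIT 20;"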
Hosted engine already imported
by Jamie Lawrence
Hi all,
In my hopefully-near-complete quest to automate oVirt configuration in our environment, I'm very close. One outstanding issue remains: even though the hosted engine storage domain actually was imported and shows in the GUI, some part of oVirt appears to think that hasn't happened yet.
In the GUI, a periodic error is logged: “The Hosted Engine Storage Domain doesn’t exist. It will be imported automatically…”
In engine.log, all I’m seeing that appears relevant is:
2017-04-25 10:28:57,988-07 INFO [org.ovirt.engine.core.bll.storage.domain.ImportHostedEngineStorageDomainCommand] (org.ovirt.thread.pool-6-thread-9) [1e44dde0] Lock Acquired to object 'EngineLock:{exclusiveLocks='[]', sharedLocks='null'}'
2017-04-25 10:28:57,992-07 WARN [org.ovirt.engine.core.bll.storage.domain.ImportHostedEngineStorageDomainCommand] (org.ovirt.thread.pool-6-thread-9) [1e44dde0] Validation of action 'ImportHostedEngineStorageDomain' failed for user SYSTEM. Reasons: VAR__ACTION__ADD,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_ALREADY_EXIST
2017-04-25 10:28:57,992-07 INFO [org.ovirt.engine.core.bll.storage.domain.ImportHostedEngineStorageDomainCommand] (org.ovirt.thread.pool-6-thread-9) [1e44dde0] Lock freed to object 'EngineLock:{exclusiveLocks='[]', sharedLocks='null'}'
Otherwise the log is pretty clean.
I saw nothing of interest in the Gluster logs or the hosted-engine-ha logs on the host it is running on.
It appears harmless, but we aren't actually using these systems yet, and in any case we don't want the error spamming the log forever. This is 4.1.1.8-1.el7.centos, hosted engine on CentOS 7.6.1311, with Gluster for both the hosted engine and general data domains.
Has anyone seen this before?
Thanks in advance,
-j
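For what it's worth, a sketch for comparing what the engine has registered
against what the import command keeps retrying (the engine host name and
credentials are placeholders):

# list the storage domain names the engine already knows about
curl -s -k -u 'admin@internal:PASSWORD' https://engine.example.com/ovirt-engine/api/storagedomains | grep -i '<name>'
# and see how often the import is being re-attempted
grep ImportHostedEngineStorageDomain /var/log/ovirt-engine/engine.log | tail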
Ovirt tasks "stuck"
by Jim Kusznir
Hi:
A few days ago I attempted to create a new VM from one of the
ovirt-image-repository images. I haven't really figured out how to use
this reliably yet, and in this case, while trying to import an image, one
of my nodes spontaneously rebooted (or at least, it looked like that to
oVirt... not sure if it had an OOM issue or something else). I assume it
was the node that got the task of importing those images, as ever since
then (several days now), on my management screen under "Tasks" it shows the
attempted imports, still stuck in "processing". I'm quite certain it's not
actually processing. I do believe it used some of my storage up in the
partially downloaded images, though (they do show up as
GlanceDisk-<numbers>, with a status of "Locked" under the main Disks tab.)
How do I "properly" recover from this (abort the task and delete the
partial download)?
Thanks!
--Jim
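A cleanup sketch: the engine ships database utilities for stale async
tasks and locked entities. Flags vary between versions, so check the
built-in help first, and back up the engine database before removing
anything:

cd /usr/share/ovirt-engine/setup/dbutils
./taskcleaner.sh -h        # options for listing/clearing stale async tasks
./unlock_entity.sh -h      # e.g. query locked disks with: ./unlock_entity.sh -q -t disk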
Error importing old NFS storage domain
by Endre Karlson
I have an existing NFS storage domain I would like to import.
I add the Name attribute and set the path and hit Enter but it gives me
"Error while executing actions Attach Storage Domain: Internal Engine
Error".
I checked the engine logs too, to see if there's any clue when I do it, but
I can't seem to find anything. Maybe I can attach them here?
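The relevant errors usually land in these logs; a sketch assuming default
log locations (the matching snippets would be the useful part to attach):

grep -i -e AttachStorageDomain -e 'Internal Engine Error' /var/log/ovirt-engine/engine.log | tail -n 50
# and on the SPM host:
grep -i -e attach -e Traceback /var/log/vdsm/vdsm.log | tail -n 50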