oVirt node and bonding options customizations
by Gianluca Cecchi
Hello,
when using CentOS as the OS for hypervisors with mode=4 bonding (802.3ad),
in the past I had to use lacp_rate=1 (aka fast), because I noticed problems
with the default lacp_rate=0 parameter.
Now I'm testing an oVirt node and I see that in the graphical interface
there are options to set the mode and the link up/down delays, but not the
lacp_rate option.
It is a different environment from the one where I had to modify it, so I
will start with the default value and see if I get any problems again.
Now the ifcfg-bond0 file created at oVirt node level by the web mgmt
interface contains:
BONDING_OPTS="miimon=100 updelay=0 downdelay=0 mode=802.3ad"
Would I be able to modify it in case of need and have it persist across
reboots, with something like this?
BONDING_OPTS="miimon=100 updelay=0 downdelay=0 mode=802.3ad lacp_rate=1"
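For reference, whichever rate ends up configured, the value actually in
effect can be checked from the bonding driver's proc interface (assuming
the bond is named bond0):

$ grep "LACP rate" /proc/net/bonding/bond0
LACP rate: slow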
Thanks in advance,
Gianluca
epel and collectd
by Fabrice Bacchella
In the release notes, even for the 4.1.6 RC, I see:
https://www.ovirt.org/release/4.1.6/
...
OpsTools currently includes collectd 5.7.0, and the write_http plugin is packaged separately.
But if I check the current state:
yum list collectd-write_http collectd
...
collectd.x86_64 5.7.2-1.el7 @centos-opstools-release
collectd-write_http.x86_64 5.7.2-1.el7 @centos-opstools-release
So I think the warning is not needed any more. One can use both oVirt and EPEL without any special check.
mode 4 bonding for VMs with NetworkManager anyone?
by Gianluca Cecchi
Hello,
I'm testing oVirt node, both 4.1.5 and 4.1.6rc2.
I see that in both versions the network stack is configured with
NetworkManager enabled.
With plain CentOS I always disabled NetworkManager and only used the
classic network service alone, so I have no background with NM on
hypervisors.
I have configured a vlan (untagged) for VMs, using bonding mode=4 (802.3ad).
I have some problems with my VMs: they are reachable from some parts of the
network and not from others.
In parallel I have already asked the network guys to crosscheck the network
configuration of the datacenter, but I would like confirmation
that others are already successfully using the combination of:
- NetworkManager on the hypervisor
- bonding mode = 4 for VM traffic
Another question about an aspect I never understood: with an environment
in the past (before the 3.5 days, if I remember correctly) I used 802.3ad
but I had to set lacp_rate=1 (fast) because I had connection problems with
the default lacp_rate=0 (slow) parameter.
I see that the default is still lacp_rate=0.
Could it be the same this time? Is there any reason for the default
lacp_rate=0 in oVirt (or the reverse... ;-)?
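For what it's worth, the current default can be read straight from sysfs on
the host (assuming the bond is named bond0):

$ cat /sys/class/net/bond0/bonding/lacp_rate
slow 0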
Thanks in advance,
Gianluca
ovirt high points
by david caughey
Hi Folks,
I'm giving a demo of our new 3-node oVirt deployment next week and am
looking for some high points I can give to the managers as selling points.
If you could help with the below questions I would really appreciate it:
Who are the big users of oVirt?
Why oVirt and not VMware?
(we are a big VMware house, so "free" alone doesn't cover it)
What is the future for oVirt?
Why do you use oVirt?
Any links or ideas appreciated,
BR/David
hyperconverged question
by Charles Kozler
Hello -
I have successfully created a hyperconverged hosted-engine setup consisting
of 3 nodes - 2 for VMs and the third purely for storage. I configured it
all manually, did not use oVirt Node or anything, and built the gluster
volumes myself.
However, I noticed that when setting up the hosted engine and even when
adding a new storage domain with glusterfs type, it still asks for
hostname:/volumename
This leads me to believe that if that one node goes down (ex: node1:/data),
then the oVirt engine won't be able to communicate with that volume, because
it's trying to reach it on node 1, and the volume would thus go down.
I know the glusterfs FUSE client can connect to all nodes to provide
failover/HA, but how does the engine handle this?
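I've seen mention that the hostname is only used to fetch the volume info
at mount time, and that extra gluster mount options can be passed when
creating the storage domain, something like this in the "Mount Options"
field (host names here are just placeholders):

backup-volfile-servers=node2:node3

Is that the intended way to cover this case?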
CBT question
by Demeter Tibor
Dear list members,
Does somebody know when the CBT (changed block tracking) feature will be available in oVirt/RHEV?
We are looking for a usable backup solution for our oVirt guests, but as I've seen, there are still some API limitations.
Thanks in advance,
R
Tibor
ovirt nfs mount caused sanlock to fail to access data storage
by pengyixiang
hello, everyone
    sanlock's log:
425120 Traceback (most recent call last):
425121   File "/usr/lib/python2.7/dist-packages/vdsm/storage/task.py", line 878, in _run
425122     return fn(*args, **kargs)
425123   File "/usr/lib/python2.7/dist-packages/vdsm/logUtils.py", line 52, in wrapper
425124     res = f(*args, **kwargs)
425125   File "/usr/share/vdsm/storage/hsm.py", line 619, in getSpmStatus
425126     status = self._getSpmStatusInfo(pool)
425127   File "/usr/share/vdsm/storage/hsm.py", line 613, in _getSpmStatusInfo
425128     (pool.spmRole,) + pool.getSpmStatus()))
425129   File "/usr/share/vdsm/storage/sp.py", line 141, in getSpmStatus
425130     return self._backend.getSpmStatus()
425131   File "/usr/share/vdsm/storage/spbackends.py", line 433, in getSpmStatus
425132     lVer, spmId = self.masterDomain.inquireClusterLock()
425133   File "/usr/share/vdsm/storage/sd.py", line 817, in inquireClusterLock
425134     return self._manifest.inquireDomainLock()
425135   File "/usr/share/vdsm/storage/sd.py", line 522, in inquireDomainLock
425136     return self._domainLock.inquire(self.getDomainLease())
425137   File "/usr/lib/python2.7/dist-packages/vdsm/storage/clusterlock.py", line 372, in inquire
425138     resource = sanlock.read_resource(lease.path, lease.offset)
425139 SanlockException: (13, 'Sanlock resource read failure', 'Permission denied')

I tested it: on the node I added user "linx" to group "kvm":

$ cat /etc/group | grep "kvm"
kvm:x:112:qemu,vdsm,linx,sanlock

Then I created a file in $HOME:

$ ls -l
total 16
-rw-rw---- 1 vdsm kvm     6 Sep 11 20:06 1.txt
drwxr-xr-x 9 linx linx 4096 Sep  1 15:58 linx-virtualization
drw-rw---- 3 linx linx 4096 Sep 11 20:13 test2
drw-rw---- 2 linx linx 4096 Sep 11 20:19 test3

The file can be read as user "linx":

$ cat 1.txt
pencc

The leases file is owned by vdsm:kvm too:

$ ls -l /rhev/data-center/mnt/192.168.11.55\:_home_dataStorage/1845be22-1ac4-4e42-bbcb-7ba9ccd6e569/dom_md/leases
-rw-rw---- 1 vdsm kvm 2097152 Sep 11 19:21 /rhev/data-center/mnt/192.168.11.55:_home_dataStorage/1845be22-1ac4-4e42-bbcb-7ba9ccd6e569/dom_md/leases

But we cannot read that file as user "linx":

$ cat /rhev/data-center/mnt/192.168.11.55\:_home_dataStorage/1845be22-1ac4-4e42-bbcb-7ba9ccd6e569/dom_md/leases
cat: '/rhev/data-center/mnt/192.168.11.55:_home_dataStorage/1845be22-1ac4-4e42-bbcb-7ba9ccd6e569/dom_md/leases': Permission denied

Why is this? The NFS server configuration follows:

# cat /etc/exports

/home/dataStorage 192.168.11.*(rw,sync)
/home/dataStorage2 192.168.11.*(rw,sync,no_root_squash,no_subtree_check)
/home/isoStorage 192.168.11.*(rw,sync,no_root_squash,no_subtree_check)

Are my nfs-server exports missing some arguments? Any ideas?
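From the docs I understand the usually suggested export options for oVirt
NFS domains squash everything to UID/GID 36 (vdsm:kvm), roughly like this
(path reused from above; the exact option set may vary):

/home/dataStorage 192.168.11.*(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)

together with chown 36:36 on the exported directory. Is that what is
missing here?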
Automatic snapshot
by Lionel Caignec
Hi,
I'm studying the snapshot possibilities in oVirt.
I would like to set up a cron job running every day which:
- retrieves the VM list from a specific cluster
- creates a new snapshot (without memory) for every VM
- removes the oldest snapshots to keep only 5
I'm wondering what the best approach is to do that:
a batch of commands for "ovirt-shell", scripting a bit with Python, or using
the oVirt API? Unless something like this already exists.
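As a starting point, a minimal sketch with the Python SDK (ovirtsdk4) could
look like the following - URL, credentials and the cluster name are
placeholders, and a real script should also wait for each new snapshot to
leave the "locked" state before pruning:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='...',
    ca_file='ca.pem',
)
vms_service = connection.system_service().vms_service()
# retrieve the VM list from a specific cluster
for vm in vms_service.list(search='cluster=mycluster'):
    snaps_service = vms_service.vm_service(vm.id).snapshots_service()
    # create a new snapshot without memory state
    snaps_service.add(
        types.Snapshot(description='nightly', persist_memorystate=False),
    )
    # remove the oldest regular snapshots, keeping only the last 5
    regular = sorted(
        (s for s in snaps_service.list()
         if s.snapshot_type == types.SnapshotType.REGULAR),
        key=lambda s: s.date,
    )
    for old in regular[:-5]:
        snaps_service.snapshot_service(old.id).remove()
connection.close()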
Thank you for helping
--
Lionel
More than one mgmt network possible?
by Gianluca Cecchi
Hello,
in site1 I have 2 oVirt hosts with ovirtmgmt configured on vlan167.
Now I want to add a server that is in site2, where this vlan doesn't reach.
Here I have a vlan 169 that is routed to the vlan 167 of site1.
Can I add the host to the same cluster, or is the only way to "transport"
vlan167 into site2 too?
Thanks,
Gianluca
Re: [ovirt-users] Native Access on gluster storage domain
by Stefano Danzi
Your suggestion solved the problem.
In the UI the relative flag is still missing, but now VMs are using gfapi.
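For the record, the engine-config change boils down to something like this
on the Hosted Engine VM (option name assumed from the 4.1 gfapi feature):

# engine-config -s LibgfApiSupported=true --cver=4.1
# systemctl restart ovirt-engine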
On 11/09/2017 05:23, Sahina Bose wrote:
> You could try to enable the config option for the 4.1 cluster level -
> using engine-config tool from the Hosted Engine VM. This will require
> a restart of the engine service and will enable gfapi access for all
> clusters at 4.1 level though - so try this option if this is acceptable.
>
> On Wed, Aug 30, 2017 at 8:02 PM, Stefano Danzi <s.danzi@hawai.it> wrote:
>
> above the logs.
> PS cluster compatibility level is 4.1
>
> engine:
>
> 2017-08-30 16:26:07,928+02 INFO
> [org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-8)
> [56d090c5-1097-4641-b745-74af8397d945] Lock Acquired to object
> 'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'
> 2017-08-30 16:26:07,951+02 WARN
> [org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-8)
> [56d090c5-1097-4641-b745-74af8397d945] Validation of action
> 'UpdateCluster' failed for user admin@internal. Reasons:
> VAR__TYPE__CLUSTER,VAR__ACTION__UPDATE,CLUSTER_CANNOT_UPDATE_SUPPORTED_FEATURES_WITH_LOWER_HOSTS
> 2017-08-30 16:26:07,952+02 INFO
> [org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-8)
> [56d090c5-1097-4641-b745-74af8397d945] Lock freed to object
> 'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'
>
> vdsm:
>
> 2017-08-30 16:29:23,310+0200 INFO (jsonrpc/0)
> [jsonrpc.JsonRpcServer] RPC call GlusterHost.list succeeded in
> 0.15 seconds (__init__:539)
> 2017-08-30 16:29:23,419+0200 INFO (jsonrpc/4)
> [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in
> 0.01 seconds (__init__:539)
> 2017-08-30 16:29:23,424+0200 INFO (jsonrpc/3)
> [jsonrpc.JsonRpcServer] RPC call Host.getAllVmIoTunePolicies
> succeeded in 0.00 seconds (__init__:539)
> 2017-08-30 16:29:23,814+0200 INFO (jsonrpc/5)
> [jsonrpc.JsonRpcServer] RPC call GlusterHost.list succeeded in
> 0.15 seconds (__init__:539)
> 2017-08-30 16:29:24,011+0200 INFO (Reactor thread)
> [ProtocolDetector.AcceptorImpl] Accepted connection from ::1:51862
> (protocoldetector:72)
> 2017-08-30 16:29:24,023+0200 INFO (Reactor thread)
> [ProtocolDetector.Detector] Detected protocol stomp from ::1:51862
> (protocoldetector:127)
> 2017-08-30 16:29:24,024+0200 INFO (Reactor thread)
> [Broker.StompAdapter] Processing CONNECT request (stompreactor:103)
> 2017-08-30 16:29:24,031+0200 INFO (JsonRpc (StompReactor))
> [Broker.StompAdapter] Subscribe command received (stompreactor:130)
> 2017-08-30 16:29:24,287+0200 INFO (jsonrpc/2)
> [jsonrpc.JsonRpcServer] RPC call Host.getHardwareInfo succeeded in
> 0.01 seconds (__init__:539)
> 2017-08-30 16:29:24,443+0200 INFO (jsonrpc/7) [vdsm.api] START
> getSpmStatus(spUUID=u'00000002-0002-0002-0002-0000000001ef',
> options=None) from=::ffff:192.168.1.55,46502, flow_id=1f664a9,
> task_id=c856903a-0af1-4c0c-8a44-7971fee7dffa (api:46)
> 2017-08-30 16:29:24,446+0200 INFO (jsonrpc/7) [vdsm.api] FINISH
> getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM',
> 'spmLver': 1430L}} from=::ffff:192.168.1.55,46502,
> flow_id=1f664a9, task_id=c856903a-0af1-4c0c-8a44-7971fee7dffa (api:52)
> 2017-08-30 16:29:24,447+0200 INFO (jsonrpc/7)
> [jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus
> succeeded in 0.00 seconds (__init__:539)
> 2017-08-30 16:29:24,460+0200 INFO (jsonrpc/6)
> [jsonrpc.JsonRpcServer] RPC call GlusterHost.list succeeded in
> 0.16 seconds (__init__:539)
> 2017-08-30 16:29:24,467+0200 INFO (jsonrpc/1) [vdsm.api] START
> getStoragePoolInfo(spUUID=u'00000002-0002-0002-0002-0000000001ef',
> options=None) from=::ffff:192.168.1.55,46506, flow_id=1f664a9,
> task_id=029ec55e-9c47-4a20-be44-8c80fd1fd5ac (api:46)
>
>
> On 30/08/2017 16:06, Shani Leviim wrote:
>> Hi Stefano,
>> Can you please attach your engine and vdsm logs?
>>
>> Regards,
>> Shani Leviim
>>
>> On Wed, Aug 30, 2017 at 12:46 PM, Stefano Danzi <s.danzi@hawai.it> wrote:
>>
>> Hello,
>> I have a test environment with a single host and self-hosted
>> engine running oVirt Engine: 4.1.5.2-1.el7.centos
>>
>> I want to try the option "Native Access on gluster storage
>> domain" but I get an error because I have to put the
>> host in maintenance mode. I can't do that because I have a
>> single host, so the hosted engine can't be migrated.
>>
>> Is there a way to change this option but apply it at the next
>> reboot?