bonding mode-alb
by Demeter Tibor
Dear members,
I would like to use two switches to build a highly available network connection for my NFS storage.
Unfortunately, these switches do not support 802.3ad LACP (I really can't stack them), but I've read about the mode-alb and mode-tlb bonding modes.
I know these modes are available in oVirt, but how do they work? Also, how safe are they? Are they meant for HA or for load balancing?
I've read some forums where these modes are not recommended for use with oVirt. What is the truth?
I would like to use this only for storage traffic; it will be separated from other network traffic. I have two 10GbE switches and two 10GbE ports in my nodes.
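(For reference, if one experiments with these modes, the custom bond options in the oVirt UI would look roughly like this - a hedged sketch; the option strings follow the kernel bonding driver naming:)
mode=balance-tlb miimon=100   (transmit-only load balancing, no special switch support needed)
mode=balance-alb miimon=100   (adds receive load balancing via ARP negotiation, also switch-independent)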
Thanks in advance,
R
Tibor
Official Hyperconverged Gluster oVirt upgrade procedure?
by Hanson
Hi Guys,
Just wondering if we have an updated manual, or what's the current
procedure for upgrading the nodes in a hyperconverged oVirt Gluster pool?
I.e., nodes run oVirt 4.0 as well as GlusterFS, with the hosted engine running
in a Gluster storage domain.
Put the node into maintenance mode, disable GlusterFS from the oVirt GUI, and run
yum update? Something like the sketch below?
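(A hedged sketch of what I have in mind - the package name and the Gluster handling are assumptions on my part, not a confirmed procedure:)
1. In the engine UI, put the host into maintenance (ticking the option to stop Gluster services).
2. On the node itself:
   # yum update ovirt-node-ng-image-update    (oVirt Node updates ship as an image-update RPM)
   # reboot
3. Activate the host again from the engine UI and wait for the Gluster heal to finish before moving to the next node.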
Thanks!
oVirt Node 4.1.8 -> 4.2 upgrade
by Ed Stout
Good Morning,
Perhaps someone could clarify for me, please - to update oVirt Nodes
from 4.1.8 -> 4.2, should I just be able to yum install the 4.2
release file, then yum update? Or, do I have to start with the 4.2
ISO then update from there?
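(To be concrete, the first approach would be something like this - a hedged sketch; the release RPM URL follows the usual resources.ovirt.org naming and is my assumption:)
# yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm
# yum update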
When I try the former it fails with dependency issues on the nodes; I
did, however, manage to update a self-hosted appliance engine to 4.2. I
had a read through the mailing list archive and couldn't find similar
issues, so perhaps I'm doing something wrong.
--> Processing Dependency: lshw for package: vdsm-4.20.9.3-1.el7.centos.x86_64
--> Finished Dependency Resolution
Error: Package: ovirt-hosted-engine-setup-2.2.3-1.el7.centos.noarch (ovirt-4.2)
Requires: libvirt-daemon-config-network
Error: Package: ovirt-hosted-engine-setup-2.2.3-1.el7.centos.noarch (ovirt-4.2)
Requires: virt-install
Error: Package: glusterfs-geo-replication-3.12.3-1.el7.x86_64
(ovirt-4.2-centos-gluster312)
Requires: python-prettytable
Error: Package: vdsm-4.20.9.3-1.el7.centos.x86_64 (ovirt-4.2)
Requires: lshw
Error: Package: glusterfs-events-3.12.3-1.el7.x86_64
(ovirt-4.2-centos-gluster312)
Requires: python-prettytable
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
OS Version: RHEL - 7 - 4.1708.el7.centos
OS Description: oVirt Node 4.1.8
Kernel Version: 3.10.0 - 693.11.1.el7.x86_64
KVM Version: 2.9.0 - 16.el7_4.8.1
LIBVIRT Version: libvirt-3.2.0-14.el7_4.5
VDSM Version: vdsm-4.19.43-1.el7.centos
SPICE Version: 0.12.8 - 2.el7.1
GlusterFS Version: glusterfs-3.8.15-2.el7
CEPH Version: librbd1-0.94.5-2.el7
Cheers,
Ed
4.2.0 Unable to create VM disk from VM Portal
by Vrgotic, Marko
Dear oVirt community,
I have the following setup:
4 servers as GlusterFS storage, providing volume 1 for the HostedEngine and volume 2 for VM storage
7 hypervisors
Self-hosted engine 4.2.0, clean install (not an upgrade)
Problem:
The option to add/create a VM disk is missing from the VM Portal, even as the Admin user.
I have tried with the Admin user and with a user "Test" with the PowerUser role.
In both cases there is no option to create or attach a disk.
Has the approach changed since 4.2.0? Am I missing a setting?
Please let me know; I will provide more data if needed.
Kindly awaiting your reply.
--
Met vriendelijke groet / Best regards,
Marko Vrgotic
System Engineer/Customer Care
oVirt NGN image customization troubles
by Giuseppe Ragusa
Hi all,
I'm trying to modify the oVirt NGN image (to add RPMs, since imgbased rpmpersistence currently seems to have a bug: https://bugzilla.redhat.com/show_bug.cgi?id=1528468 ) but I'm unfortunately stuck at the very beginning: it seems that I'm unable to recreate even the standard 4.1 squashfs image.
I'm following the instructions at https://gerrit.ovirt.org/gitweb?p=ovirt-node-ng.git;a=blob;f=README
I'm working inside a CentOS7 fully-updated vm (hosted inside VMware, with nested virtualization enabled).
I'm trying to work on the 4.1 branch, so I issued a:
./autogen.sh --with-ovirt-release-rpm-url=http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
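(For context, the full sequence I'm following is roughly this - a hedged sketch based on the README; the clone URL is my assumption:)
$ git clone https://gerrit.ovirt.org/ovirt-node-ng
$ cd ovirt-node-ng
$ ./autogen.sh --with-ovirt-release-rpm-url=http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
$ make squashfs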
And after that I'm stuck in the "make squashfs" step: it never ends (keeps printing dots forever with no errors/warnings in log messages nor any apparent activity on the virtual disk image).
Invoking it in debug mode and connecting to the VNC console shows the detailed Plymouth startup listing stuck (latest messages displayed: "Starting udev Wait for Complete Device Initialization..." and "Starting Device-Mapper Multipath Device Controller...")
I wonder if it's actually supposed to be run only from a recent Fedora (the "dnf" reference seems a good indicator): if so, which version?
I kindly ask for advice: has anyone succeeded in modifying/reproducing NGN squash images recently? If so, how? :-)
Many thanks in advance,
Giuseppe
10Gb Networking with OVN
by Beau Sapach
Hello everyone,
I see here: https://www.ovirt.org/blog/2017/09/introducing-ovirt-4.2.0/
that version 4.2 will have OVN support. Does anyone know if this will
allow VMs to take advantage of 10Gb networking without needing SR-IOV?
--
Beau Sapach
*System Administrator | Information Technology Services | University of
Alberta Libraries*
*Phone: 780.492.4181 | Email: Beau.Sapach(a)ualberta.ca
<Beau.Sapach(a)ualberta.ca>*
oVirt Node 4.1 question, writing files to /root and RPMs
by Matt Simonsen
Hello all,
I read at https://www.ovirt.org/develop/projects/node/troubleshooting/
that "Changes made from the command line are done at your own risk.
Making changes has the potential to leave your system in an unusable
state." It seems clear that RPMs should not be installed.
Is this accurate for https://www.ovirt.org/node/ ?
We have added smartctl and hpacucli in order to do disk and RAID
monitoring. So far our node servers have retained changes across
reboots, which is the primary reason I'm wondering if perhaps this
applies to an older version of oVirt Node.
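(For reference, the kind of checks we run looks roughly like this - a hedged sketch; the device name and controller slot are just examples:)
# smartctl -H /dev/sda                 (SMART health summary for one disk)
# hpacucli ctrl slot=0 show config     (HP Smart Array controller / logical drive status)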
If what we have been doing is not supported, what is the suggested
method to do hardware monitoring (in particular of disks)?
Thanks
Matt
[vdsm] status update: running containers alongside VMs
by Francesco Romani
Hi everyone,
I'm happy to share some progress on the former "convirt"[1] project,
which aims to let Vdsm run containers alongside VMs, on bare metal.
In the last couple of months I kept updating the patch series, which
is approaching readiness to be merged into Vdsm.
Please read through this mail to see what the patchset can do now,
how you could try it *now*, even before it is merged.
Everyone is invited to share thoughts and ideas about how this effort
could evolve.
This will be a long mail; I will amend, enhance and polish the content
and make a blog post (on https://mojaves.github.io) to make it easier
to consume and to have some easy-to-find documentation. Later on the
same content will appear also on the oVirt blog.
Happy hacking!
+++
# How to try the experimental container support for Vdsm
Vdsm is gaining *experimental* support to run containers alongside VMs.
Vdsm has long had the ability to manage VMs which run containers,
and recently gained support for
[atomic guests](http://www.projectatomic.io/blog/2015/01/running-ovirt-guest-agent-as-privileged-container/).
With the new support we are describing, you will be able to manage containers
with the same, proven infrastructure that lets you manage VMs.
This feature is currently being developed and it is still not merged in the
Vdsm codebase, so some extra work is needed if you want to try it out.
We are aiming to merge it in the oVirt 4.1.z cycle.
## What works, aka what to expect
The basic features are expected to work:
1. Run any docker image on the public docker registry
2. Make the container accessible from the outside (aka not just from localhost)
3. Use file-based storage for persistent volumes
## What does not yet work, aka what NOT to expect
Few things are planned and currently under active development:
1. Monitoring. Engine will not get any update from the container besides the "VM" status (Up, Down...).
One important drawback is that Engine will not tell you the container's IP address;
you will need to connect to the Vdsm host and discover it using standard docker tools (see the sketch after this list).
2. Proper network integration. Some steps still need manual intervention
3. Stability and recovery - it's pre-alpha software after all! :)
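A hedged example of discovering a container's IP on the Vdsm host with plain docker commands (the container name is hypothetical):
# docker ps --format '{{.ID}}  {{.Names}}  {{.Image}}'
# docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-container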
## 1. Introduction and prerequisites
Trying out container support affects only the host and the Vdsm.
Besides adding a few custom properties (totally safe and supported since early
3.z), there are zero changes required to the DB and to Engine.
Nevertheless, we recommend dedicating one oVirt 4.y environment,
or at least one 4.y host, to trying out the container feature.
To get started, the first thing you need is to set up a vanilla oVirt 4.y
installation. We will need to make changes to Vdsm and to the
Vdsm host, so hosted engine and/or oVirt Node may add extra complexity;
better to avoid them for the moment.
The remainder of this tutorial assumes you are using two hosts,
one for Vdsm (will be changed) and one for Engine (will require zero changes);
furthermore, we assume the Vdsm host is running on CentOS 7.y.
We require:
- one test host for Vdsm. This host needs to have one NIC dedicated to containers.
We will use the [docker macvlan driver](https://raesene.github.io/blog/2016/07/23/Docker-MacVLAN/),
so this NIC *must not be* part of a bridge.
- docker >= 1.12
- oVirt >= 4.0.5 (Vdsm >= 4.18.15)
- CentOS >= 7.2
Docker >= 1.12 is available for download [here](https://docs.docker.com/engine/installation/linux/centos/)
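A quick, hedged way to double-check the prerequisites on the Vdsm host:
# docker --version
# rpm -q vdsm
# cat /etc/centos-release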
Caveats:
1. docker from the official rpms conflicts with docker from CentOS, and has a different package name: docker-engine vs docker.
Please note that the kubernetes package from CentOS, for example, requires 'docker', not 'docker-engine'.
2. you may want to replace the default service file
[with this one](https://github.com/mojaves/convirt/blob/master/patches/centos72/syst...
and to use this
[sysconfig file](https://github.com/mojaves/convirt/blob/master/patches/centos72/sys....
Here I'm just adding the storage options docker requires, much like the CentOS docker is configured.
Configuring docker like this can save you some troubleshooting, especially if you had docker from CentOS installed
on the testing box.
## 2. Patch Vdsm to support containers
You need to patch and rebuild Vdsm.
Fetch [this patch](https://github.com/mojaves/convirt/blob/master/patches/vdsm/4.18.1...
and apply it against Vdsm 4.18.15.1. Vdsm 4.18.15.{1,2,...} are supported as well.
Rebuild Vdsm and reinstall on your box.
[centos 7.2 packages are here](https://github.com/mojaves/convirt/tree/master/rpms/centos72)
Make sure you install the Vdsm command line client (vdsm-cli)
Restart *both* Vdsm and Supervdsm, make sure Engine still works flawlessly with patched Vdsm.
This ensures that no regression is introduced, and that your environment can run VMs just as before.
Now we can proceed adding the container support.
start docker:
# systemctl start docker-engine
(optional)
# systemctl enable docker-engine
Restart Vdsm again
# systemctl restart vdsm
Now we can check if Vdsm detects docker, so you can use it:
still on the same Vdsm host, run
$ vdsClient -s 0 getVdsCaps | grep containers
containers = ['docker', 'fake']
This means this Vdsm can run containers using 'docker' and 'fake' runtimes.
Ignore the 'fake' runtime; as the name suggests, it is a test driver, kinda like /dev/null.
Now we need to make sure the host network configuration is fine.
### 2.1. Configure the docker network for Vdsm
PLEASE NOTE
that the suggested network configuration assumes that
* you have one network, `ovirtmgmt` (the default one) you use for everything
* you have one Vdsm host with at least two NICs, one bound to the `ovirtmgmt` network, and one spare
_This step is not yet automated by Vdsm_, so manual action is needed; Vdsm will take
care of this automatically in the future.
You can use
[this helper script](https://github.com/mojaves/convirt/blob/master/patches/vdsm/cont-...,
which reuses the Vdsm libraries. Make sure
you have patched Vdsm for container support before using it.
Let's review what the script needs:
# ./cont-setup-net -h
usage: cont-setup-net [-h] [--name [NAME]] [--bridge [BRIDGE]]
[--interface [INTERFACE]] [--gateway [GATEWAY]]
[--subnet [SUBNET]] [--mask [MASK]]
optional arguments:
-h, --help show this help message and exit
--name [NAME] network name to use
--bridge [BRIDGE] bridge to use
--interface [INTERFACE]
interface to use
--gateway [GATEWAY] address of the gateway
--subnet [SUBNET] subnet to use
--mask [MASK] netmask to use
So we need to feed --name, --interface, --gateway, --subnet and optionally --mask (default, /24, is often fine).
For my case the default mask was indeed fine, so I used the script like this:
# ./cont-setup-net --name ovirtmgmt --interface enp3s0 --gateway 192.168.1.1 --subnet 192.168.1.0
This is the output I got:
DEBUG:virt.containers.runtime:configuring runtime 'docker'
DEBUG:virt.containers.command:* calling ['/bin/docker', 'network', 'inspect', 'ovirtmgmt']
Error: No such network: ovirtmgmt
DEBUG:virt.containers.command:* called ['/bin/docker', 'network', 'inspect', 'ovirtmgmt']
DEBUG:virt.containers.runtime.Docker:config: cannot load 'ovirtmgmt', ignored
DEBUG:virt.containers.command:* calling ['/bin/docker', 'network', 'create', '-d', 'macvlan', '--subnet=192.168.1.0/24', '--gateway=192.168.1.1', '--ip-range=192.168.1.0/24', '-o', 'parent=enp3s0', 'ovirtmgmt']
DEBUG:virt.containers.command:* called ['/bin/docker', 'network', 'create', '-d', 'macvlan', '--subnet=192.168.1.0/24', '--gateway=192.168.1.1', '--ip-range=192.168.1.0/24', '-o', 'parent=enp3s0', 'ovirtmgmt']
DEBUG:virt.containers.runtime:configuring runtime 'fake'
You can clearly see what the script did, and why it needed root privileges. Let's double-check using the docker tools:
# docker network ls
NETWORK ID NAME DRIVER SCOPE
91535f3425a8 bridge bridge local
d42f7e5561b5 host host local
621ab6dd49b1 none null local
f4b88e4a67eb ovirtmgmt macvlan local
# docker network inspect ovirtmgmt
[
{
"Name": "ovirtmgmt",
"Id": "f4b88e4a67ebb7886ec74073333d613b1893272530cae4d407c95ab587c5fea1",
"Scope": "local",
"Driver": "macvlan",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.1.0/24",
"IPRange": "192.168.1.0/24",
"Gateway": "192.168.1.1"
}
]
},
"Internal": false,
"Containers": {},
"Options": {
"parent": "enp3s0"
},
"Labels": {}
}
]
Looks good! The host configuration is complete. Let's move to the Engine side.
## 3. Configure Engine
As mentioned above, we now need to configure Engine. This boils down to
adding a few custom properties for VMs.
In case you were already using custom properties, you need to amend the command
line so it does not overwrite your existing ones:
# engine-config -s UserDefinedVMProperties='volumeMap=^[a-zA-Z_-]+:[a-zA-Z_-]+$;containerImage=^[a-zA-Z]+(://|)[a-zA-Z]+$;containerType=^(docker|rkt)$' --cver=4.0
It is worth stressing that while the variables are container-specific,
the VM custom properties are a totally unintrusive and old concept in oVirt, so
this step is totally safe.
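If you already had UserDefinedVMProperties set, a hedged way to read back the current value before amending the command above:
# engine-config -g UserDefinedVMProperties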
Now restart Engine to let it use the new variables:
# systemctl restart ovirt-engine
The next step is actually configuring one "container VM" and running it.
## 4. Create the container "VM"
To finally run a container, you start by creating a VM much like you always did, with
a few changes:
1. most of the hardware-related configuration isn't relevant for container "VMs",
besides cpu share and memory limits; this will be better documented in the
future; unneeded configuration will just be ignored
2. You need to set some custom properties for your container "VM". Those are
actually needed to enable the container flow, and they are documented in
the next section. You *need* to set at least `containerType` and `containerImage`.
### 4.2. Custom variables for container support
The container support needs some custom properties to be properly configured:
1. `containerImage` (*needed* to enable the container system).
Just select the target image you want to run. You can use the standard syntax of the
container runtimes.
2. `containerType` (*needed* to enable the container system).
Selects the container runtime you want to use. All the available options are always shown.
Please note that unavailable container options are not yet grayed out.
If you *do not* have rkt support on your host, you can still select it, but it won't work.
3. `volumeMap`, key:value-like. You can map one "VM" disk (key) to one container volume (value)
to have persistent storage. Only file-based storage is supported.
Example configuration:
`containerImage = redis`
`containerType = docker`
`volumeMap = vda:data` (this may not be needed, and the volume label is just for illustrative purposes)
### 4.3. A little bit of extra work: preload the images on the Vdsm host
This step is not needed by the flow, and will be handled by oVirt in the future.
The issue is how container images are handled. They are stored by the container
management system (rkt, docker) on each host, and they are not pre-downloaded.
To shorten the duration of the first boot, you are advised to pre-download
the image(s) you want to run. For example
## on the Vdsm host you want to use with containers
# docker pull redis
## 5. Run the container "VM"
You are now all set to run your "VM" using oVirt Engine, just like any existing VM.
Some actions don't make sense for a container "VM", like live migration.
Engine won't stop you from trying those actions, but they will fail gracefully
with the standard errors.
## 6. Next steps
What to expect from this project in the future?
For the integration with Vdsm, we want to fix the existing known issues, most notably:
* add proper monitoring/reporting of the container health
* ensure proper integration of the container image store with oVirt storage management
* streamline the network configuration
What is explicitly excluded for now is any Engine change. This is a Vdsm-only change at the
moment, so fixing the following is currently unplanned:
* First and foremost, Engine will not distinguish between real VMs and container VMs.
Actions unavailable to containers will not be hidden from the UI. Same for monitoring
and configuration data, which will be ignored.
* Engine is NOT aware of the volumes one container can use. You must inspect and do the
mapping manually.
* Engine is NOT aware of the available container runtimes. You must select the runtime carefully.
Proper integration with Engine may be added in the future once this feature exits
from the experimental/provisional stage.
Thanks for reading, make sure to share your thoughts on the oVirt mailing lists!
+++
[1] we keep calling it that way _only_ internally, because it's a short
name we are used to. After the merge/once we release it, we will use
a different name, like "vdsm-containers" or something like it.
--
Francesco Romani
Red Hat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani
Get UU
by Blaster
I need to re-create an old oVirt Windows VM. The VM has an old licensed product that we can no longer get a license reissued for. It looks like I need to feed back the original QEMU UUID that was used when the VM was created.
I have an engine config backup and a backup of the oVirt storage domain.
I viewed the engine backup, but the format as-is doesn't give any obvious configuration details of the VMs at the time of the backup. Is the UUID as obvious as the UUID used in the file name of the VM's boot volume?
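(In case it helps anyone answering: what I had in mind for digging further into the backup looks roughly like this - a hedged sketch; the file names inside the backup archive are my guesses:)
# engine-backup produces a tar archive; the database dump inside is a PostgreSQL
# custom-format archive, so plain grep does not work on it directly
# mkdir /tmp/engine-backup && tar -xf engine-backup-file.tar -C /tmp/engine-backup
# pg_restore -f engine.sql /tmp/engine-backup/db/engine_backup.db   (path is an assumption)
# grep -i '<vm name>' engine.sql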
Q: 2-Node Failover Setup - NFS or GlusterFS ?
by Andrei V
Hi !
I'm installing a 2-node failover cluster (2 x Xeon servers with local RAID
5 / ext4 for the oVirt storage domains).
Now I have a dilemma - use GlusterFS replica 2 or stick with NFS?
4.2 Engine is running on separate hardware.
Each node has its own storage domain (on internal RAID).
All VMs must be highly available.
One of the VMs, an accounting/stock control system with a Firebird SQL
server on CentOS, is speed-critical.
No load balancing between nodes is necessary. The 2nd node is just a backup in case the 1st
for whatever reason goes up in smoke. All VM disks must be replicated to
the backup node in near real-time or, in the worst case, every 1-2 hours.
GlusterFS solves this, but at a high performance penalty.
From what I read here
http://lists.ovirt.org/pipermail/users/2017-July/083144.html
GlusterFS performance with oVirt is not very good right now because QEMU
uses FUSE instead of libgfapi.
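(For what it's worth, there appears to be an engine option to turn libgfapi access on; a hedged sketch, assuming a 4.2 engine - I have not verified the behaviour:)
# engine-config -s LibgfApiSupported=true --cver=4.2
# systemctl restart ovirt-engine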
What is the optimal way to go?
Thanks in advance.
Andrei