PHX planned maintenance 29.01.2018 @19:00 GMT
by Evgheni Dereveanchin
Hi everyone,
Planned maintenance will be performed in the PHX data center today. The work
involves core networking hardware upgrades and may cause brief connectivity
loss to services hosted by the oVirt project, including:
* Jenkins CI
* Package repositories
* Project website
* Mailing lists
The maintenance window starts at 19:00 GMT and lasts one hour. I will pause
CI job execution before the window so that builds are queued and run after
the network upgrades are complete, reducing the chance of false-positive
failures. I will follow up with any further updates.
--
Regards,
Evgheni Dereveanchin
NFS sync vs async
by Jean-Francois Courteau
Hello there,
At first I thought I had a performance problem with virtio-scsi on Windows,
but after thorough experimentation I found that the problem is actually in
the way I share my storage over NFS.
Using the settings suggested on the oVirt website for the /etc/exports
file, I implemented the following line:
/storage
*(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
The underlying filesystem is ext4.
In the end, whatever VM I run through this NFS export, I get extremely poor
write performance, sub-100 IOPS (my disks can usually do 800-1k). Under the
hood, iotop shows that the host I/O is all taken up by jbd2, which, if I
understand correctly, is the ext4 journaling thread.
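For reference, this is roughly how the synchronous write penalty shows up
when I test directly on the host (just a sketch; it assumes /storage is the
exported ext4 filesystem and writes a small throw-away file):
# plain buffered write, journal mostly idle
dd if=/dev/zero of=/storage/ddtest bs=4k count=25000
# O_DSYNC writes, roughly what the "sync" export imposes on every NFS commit
dd if=/dev/zero of=/storage/ddtest bs=4k count=2000 oflag=dsync
rm -f /storage/ddtest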
I have read that using the "async" option in my NFS export is unsafe: if
the host crashes during a write operation, it could corrupt my VM disks.
What is the best combination of filesystem / settings if I want to stay
with NFS "sync"? Is anyone getting good performance with the same options
as me? If so, why do I get such abysmal IOPS numbers?
Thanks!
J-F Courteau
Q: Core Reservation on Hyperthreaded CPU
by Andrei V
Hi,
I'm currently running a node on an HP ProLiant with 2 x Xeon, 4 cores each.
Because of hyperthreading, each physical core is seen as 2 logical CPUs.
How does KVM/oVirt reserve cores if, for example, I allocate 4 CPU cores to a VM?
Does it allocate 4 real CPU cores, or 2 cores with 2 threads each?
Each VM consumes very little CPU.
What happens if the number of cores consumed by VMs exceeds the real count of physical CPU cores?
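In case it helps to answer, this is how I have been looking at the topology
so far (a sketch; "MyVM" is just an example name):
# logical CPU -> core/socket mapping on the host
lscpu -e
# where each vCPU of a running VM currently sits (read-only libvirt access)
virsh -r list
virsh -r vcpuinfo MyVM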
Thanks.
Andrei
disk copy exception
by Donny Davis
I am trying to copy a disk from a storage domain that was imported from 4.1
to a newer storage domain, and this exception is thrown in the UI:
Uncaught exception occurred. Please try reloading the page. Details:
Exception caught: (TypeError) : Cannot read property 'g' of null
Please have your administrator check the UI logs
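For the record, the corresponding stack trace should be in the engine's UI
log (paths assume a standard engine installation):
# on the engine machine
tail -n 200 /var/log/ovirt-engine/ui.log
# the backend side of the copy operation usually shows up in engine.log too
grep -i 'copy' /var/log/ovirt-engine/engine.log | tail -n 50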
Basic parts for rebuilding ovirt server
by Alex Bartonek
Kinda pleased with the way the restore went. Sucks that my backups weren't
working like I thought.
I did an engine backup, complete about a day before I lost data.
Are these the basic steps:
1. Reinstall CentOS (whatever OS you used)
2. Update all packages
3. Install ovirt
4. Engine restore
5. Engine setup
That gets you a basic ovirt server with your previous settings.
My problem:
Storage domains are gone along with the vms. How do I remove the VM's and
storage domains that way I can recreate them?
Thanks
Sent from ProtonMail mobile
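For steps 3 to 5, this is roughly what I have in my notes (a sketch only;
the repo URL and flag names are worth double-checking against the
engine-backup documentation for your version, and I believe --provision-db
only applies when restoring onto a freshly installed engine):
# 3. install the engine packages (example repo for oVirt 4.2)
yum install -y https://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm
yum install -y ovirt-engine
# 4. restore the backup taken earlier; the file name is whatever was passed
#    to --file when running engine-backup --mode=backup
engine-backup --mode=restore --file=engine-backup.tar.gz --log=restore.log \
  --provision-db --restore-permissions
# 5. re-run setup against the restored database
engine-setup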
Bad hd? or?
by Alex Bartonek
I'm stumped.
I powercycled my server on accident and I cannot mount my data drive. I was
getting buffer i/o errors but finally was able to boot up by disabling
automount in fstab.
I cannot mount my ext4 drive. Anything else I can check?
root@blitzen t]# dmesg|grep sdb
[    1.714138] sd 2:1:0:1: [sdb] 585871964 512-byte logical blocks: (299 GB/279 GiB)
[    1.714275] sd 2:1:0:1: [sdb] Write Protect is off
[    1.714279] sd 2:1:0:1: [sdb] Mode Sense: 6b 00 00 08
[    1.714623] sd 2:1:0:1: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[    1.750400]  sdb: sdb1
[    1.750969] sd 2:1:0:1: [sdb] Attached SCSI disk
[  443.936794]  sdb: sdb1
[  452.519482]  sdb: sdb1
sdb                                      8:16   0 279.4G  0 disk
├─sdb1                                   8:17   0 279.4G  0 part
└─3600508b100103431322020202020000a    253:3   0 279.4G  0 mpath
  └─3600508b100103431322020202020000a1 253:4   0 279.4G  0 part
[root@blitzen t]# mount /dev/sdb1 /mnt/ovirt_data/
mount: /dev/sdb1 is already mounted or /mnt/ovirt_data busy
[root@blitzen t]# mount|grep sdb
[root@blitzen t]#
Thanks in advance.
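One thing I notice in the lsblk output is that multipath has claimed the
whole disk (the 253:3 mpath map sitting on top of sdb), which would explain
why mount says the device is busy even though nothing is mounted. Is
something like this the right way to release it? (a sketch; the WWID is
taken from the output above and the blacklist stanza is my guess)
multipath -ll                                    # confirm which map holds sdb
multipath -f 3600508b100103431322020202020000a  # flush that map
mount /dev/sdb1 /mnt/ovirt_data
# to keep multipath (vdsm configures it to grab everything) from claiming the
# local disk again after reboot, blacklist it in /etc/multipath.conf -- with a
# "# VDSM PRIVATE" marker near the top so vdsm does not overwrite the file:
cat >> /etc/multipath.conf <<'EOF'
blacklist {
    wwid 3600508b100103431322020202020000a
}
EOF
systemctl reload multipathd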
Has meltdown impacted glusterFS performance?
by Jayme
I've been considering a hyperconverged oVirt setup vs. SAN/NAS, but I wonder
how the Meltdown patches have affected GlusterFS performance, since it is
CPU-intensive. Has anyone who has applied the recent kernel updates noticed
a performance drop with GlusterFS?
oVirt 4.1.9 and Spectre-Meltdown checks
by Gianluca Cecchi
Hello,
nice to see the integration of Spectre/Meltdown info in 4.1.9, both for
guests and hosts, as detailed in the release notes.
I have upgraded my CentOS 7.4 engine VM (outside of oVirt cluster) and one
oVirt host to 4.1.9.
Now in General -> Software subtab of the host I see:
OS Version: RHEL - 7 - 4.1708.el7.centos
OS Description: CentOS Linux 7 (Core)
Kernel Version: 3.10.0 - 693.17.1.el7.x86_64
Kernel Features: IBRS: 0, PTI: 1, IBPB: 0
Am I supposed to manually set any particular value?
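For reference, the only tunables I have found so far are the debugfs
switches that the checker output below also points at (a sketch; as I
understand it they only take effect once the CPU microcode exposes
SPEC_CTRL/PRED_CMD, which on this host it does not yet):
# current state (0 = disabled, 1 = kernel space, 2 = also user space)
cat /sys/kernel/debug/x86/ibrs_enabled /sys/kernel/debug/x86/ibpb_enabled
# enable IBRS for kernel space and IBPB
echo 1 > /sys/kernel/debug/x86/ibrs_enabled
echo 1 > /sys/kernel/debug/x86/ibpb_enabled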
If I run version 0.32 (updated yesterday) of spectre-meltdown-checker.sh, I
get this on my Dell M610 blade with BIOS version 6.4.0 (release date
07/18/2013):
[root@ov200 ~]# /home/g.cecchi/spectre-meltdown-checker.sh
Spectre and Meltdown mitigation detection tool v0.32
Checking for vulnerabilities on current system
Kernel is Linux 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC
2018 x86_64
CPU is Intel(R) Xeon(R) CPU X5690 @ 3.47GHz
Hardware check
* Hardware support (CPU microcode) for mitigation techniques
* Indirect Branch Restricted Speculation (IBRS)
* SPEC_CTRL MSR is available: NO
* CPU indicates IBRS capability: NO
* Indirect Branch Prediction Barrier (IBPB)
* PRED_CMD MSR is available: NO
* CPU indicates IBPB capability: NO
* Single Thread Indirect Branch Predictors (STIBP)
* SPEC_CTRL MSR is available: NO
* CPU indicates STIBP capability: NO
* Enhanced IBRS (IBRS_ALL)
* CPU indicates ARCH_CAPABILITIES MSR availability: NO
* ARCH_CAPABILITIES MSR advertises IBRS_ALL capability: NO
* CPU explicitly indicates not being vulnerable to Meltdown (RDCL_NO):
NO
* CPU vulnerability to the three speculative execution attacks variants
* Vulnerable to Variant 1: YES
* Vulnerable to Variant 2: YES
* Vulnerable to Variant 3: YES
CVE-2017-5753 [bounds check bypass] aka 'Spectre Variant 1'
* Checking count of LFENCE opcodes in kernel: YES
> STATUS: NOT VULNERABLE (107 opcodes found, which is >= 70, heuristic to
be improved when official patches become available)
CVE-2017-5715 [branch target injection] aka 'Spectre Variant 2'
* Mitigation 1
* Kernel is compiled with IBRS/IBPB support: YES
* Currently enabled features
* IBRS enabled for Kernel space: NO (echo 1 >
/sys/kernel/debug/x86/ibrs_enabled)
* IBRS enabled for User space: NO (echo 2 >
/sys/kernel/debug/x86/ibrs_enabled)
* IBPB enabled: NO (echo 1 > /sys/kernel/debug/x86/ibpb_enabled)
* Mitigation 2
* Kernel compiled with retpoline option: NO
* Kernel compiled with a retpoline-aware compiler: NO
* Retpoline enabled: NO
> STATUS: VULNERABLE (IBRS hardware + kernel support OR kernel with
retpoline are needed to mitigate the vulnerability)
CVE-2017-5754 [rogue data cache load] aka 'Meltdown' aka 'Variant 3'
* Kernel supports Page Table Isolation (PTI): YES
* PTI enabled and active: YES
* Running as a Xen PV DomU: NO
> STATUS: NOT VULNERABLE (PTI mitigates the vulnerability)
A false sense of security is worse than no security at all, see --disclaimer
[root@ov200 ~]#
So it seems I'm still vulnerable only to Variant 2; the kernel looks OK:
* Kernel is compiled with IBRS/IBPB support: YES
while the BIOS/microcode is not, correct?
Is RHEL / CentOS expected to adopt the retpoline approach too, to mitigate
Variant 2, as Fedora has done?
Eg on my just updated Fedora 27 laptop I get now:
[g.cecchi@ope46 spectre_meltdown]$ sudo ./spectre-meltdown-checker.sh
[sudo] password for g.cecchi:
Spectre and Meltdown mitigation detection tool v0.32
Checking for vulnerabilities on current system
Kernel is Linux 4.14.14-300.fc27.x86_64 #1 SMP Fri Jan 19 13:19:54 UTC 2018
x86_64
CPU is Intel(R) Core(TM) i7-2620M CPU @ 2.70GHz
Hardware check
* Hardware support (CPU microcode) for mitigation techniques
* Indirect Branch Restricted Speculation (IBRS)
* SPEC_CTRL MSR is available: NO
* CPU indicates IBRS capability: NO
* Indirect Branch Prediction Barrier (IBPB)
* PRED_CMD MSR is available: NO
* CPU indicates IBPB capability: NO
* Single Thread Indirect Branch Predictors (STIBP)
* SPEC_CTRL MSR is available: NO
* CPU indicates STIBP capability: NO
* Enhanced IBRS (IBRS_ALL)
* CPU indicates ARCH_CAPABILITIES MSR availability: NO
* ARCH_CAPABILITIES MSR advertises IBRS_ALL capability: NO
* CPU explicitly indicates not being vulnerable to Meltdown (RDCL_NO):
NO
* CPU vulnerability to the three speculative execution attacks variants
* Vulnerable to Variant 1: YES
* Vulnerable to Variant 2: YES
* Vulnerable to Variant 3: YES
CVE-2017-5753 [bounds check bypass] aka 'Spectre Variant 1'
* Mitigated according to the /sys interface: NO (kernel confirms your
system is vulnerable)
> STATUS: VULNERABLE (Vulnerable)
CVE-2017-5715 [branch target injection] aka 'Spectre Variant 2'
* Mitigated according to the /sys interface: YES (kernel confirms that
the mitigation is active)
* Mitigation 1
* Kernel is compiled with IBRS/IBPB support: NO
* Currently enabled features
* IBRS enabled for Kernel space: NO
* IBRS enabled for User space: NO
* IBPB enabled: NO
* Mitigation 2
* Kernel compiled with retpoline option: YES
* Kernel compiled with a retpoline-aware compiler: YES (kernel reports
full retpoline compilation)
* Retpoline enabled: YES
> STATUS: NOT VULNERABLE (Mitigation: Full generic retpoline)
CVE-2017-5754 [rogue data cache load] aka 'Meltdown' aka 'Variant 3'
* Mitigated according to the /sys interface: YES (kernel confirms that
the mitigation is active)
* Kernel supports Page Table Isolation (PTI): YES
* PTI enabled and active: YES
* Running as a Xen PV DomU: NO
> STATUS: NOT VULNERABLE (Mitigation: PTI)
A false sense of security is worse than no security at all, see --disclaimer
[g.cecchi@ope46 spectre_meltdown]$
BTW: I updated this laptop from F26 to F27 a few days ago, and I remember
Variant 1 was reported as fixed on F26, while now I see it as vulnerable...
I'm going to check with the Fedora mailing list about this.
Another question: what should I see for a VM with respect to
Meltdown/Spectre? Currently, in the "Guest CPU Type" field of the General
subtab of the VM, I only see "Westmere".
Should I also see anything about IBRS, etc.?
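In case it is useful, this is how I planned to verify it from the guest and
host sides (a sketch; "myvm" is just an example name, and I assume the new
CPU flags only show up if both the host microcode and the cluster CPU type
expose them):
# inside the guest: look for the new CPU flags exposed by qemu
grep -woE 'spec_ctrl|ibpb' /proc/cpuinfo | sort -u
# on the host: check which CPU model qemu is actually presenting to the VM
virsh -r dumpxml myvm | grep -A3 '<cpu '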
Thanks,
Gianluca
Ovirt 4.2 - Help adding VM to numa node via python SDK
by Don Dupuis
I am able to create a VM with NICs and disks using the Python SDK, but I'm
having trouble understanding how to define a virtual NUMA node for the VM
and pin it to a physical NUMA node of the host via the SDK. Any help in
this area would be greatly appreciated.
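What I have pieced together so far is that the SDK should map onto the
vms/{id}/numanodes REST collection; this is the raw call I was planning to
test first (untested, the engine URL, credentials and IDs are placeholders,
and the element names in the body are my guess, so please correct me):
# list the host's NUMA topology to find the node index to pin to
curl -ks -u admin@internal:password \
  'https://engine.example.com/ovirt-engine/api/hosts/<host-id>/numanodes'
# create a virtual NUMA node on the VM, pinned to host node 0 (body is a guess)
curl -ks -u admin@internal:password -X POST \
  -H 'Content-Type: application/xml' \
  -d '<vm_numa_node><index>0</index><memory>1024</memory>
        <cpu><cores><core><index>0</index></core></cores></cpu>
        <numa_node_pins><numa_node_pin><index>0</index></numa_node_pin></numa_node_pins>
      </vm_numa_node>' \
  'https://engine.example.com/ovirt-engine/api/vms/<vm-id>/numanodes'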
Thanks
Don
Windows 2012R2 virtio-scsi abysmal disk write performance
by Jean-Francois Courteau
Hello there,
I just installed oVirt on brand new machines: the engine on a VirtualBox VM
in my current infrastructure, and a single dedicated CentOS host attached
to the engine.
Here are my host specs (srvhc02):
CPU: Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz (4C 8T)
Mem: 32GB DDR3
Disks:
- 2 x WD Black 2TB (RAID1) where the CentOS is installed
- 4 x WD Gold 4TB (RAID10) dedicated to a storage domain (/dev/md0, ext4,
mounted on /storage and exported over NFS with default settings)
NIC: Intel 4-port Gigabit. One port for VMs, one port for everything else;
2 ports unused.
OS: Freshly installed CentOS 7 minimal latest (yum updated)
oVirt deployed from the Engine using root account with a password
Here are my Engine specs (srvvmmgmt):
VirtualBox guest VM on another physical computer, just for test purposes.
CPU: 2vCPU
Mem: 8GB
Disk: 50GB VDI on a RAID10 array
NIC: Virtual Intel Pro 1000 (bridged over a physical Intel 4-port Gigabit)
OS: Freshly installed CentOS 7 minimal latest (yum updated)
oVirt Engine 4.2 deployed from the repo, yum updated yesterday
Storage domain: NFS - srvhc02:/storage
ISO domain: NFS - srvhc02:/iso
Physical network is Gigabit on Cisco switches, no VLAN tagging.
My (barely usable) Windows 2012R2 guest (srvweb03)
CPU: 2vCPU
Mem: 8GB
Disk1: 100GB on the storage domain
Disk 2: 200GB on the storage domain (this is the one whose controller I was
changing for testing purposes)
NIC: virtio bridged over the VM port
I have tried every possible combination of disk controller (virtio-scsi,
virtio, IDE) and virtio drivers (stable and latest from the Fedora website,
plus others found in sometimes obscure places), and the disk write
performance is simply unacceptable. Of course I rebooted between each
driver update and controller change.
I compared with a VirtualBox installed on the same host after I cleaned up
oVirt, and the host itself is absolutely not the problem. Here is the
comparison:
                   oVirt                 VirtualBox
---------------------------------------------------------
SEQUENTIAL READ    3000 MB/s (*)         406 MB/s
SEQUENTIAL WRITE   31 MB/s               164 MB/s
4k RANDOM READ     400 MB/s (*)          7.18 MB/s
4k RANDOM WRITE    0.3 MB/s (80 IOPS)    3 MB/s (747 IOPS)
(*) ridiculously high; oVirt is probably reading from RAM
These numbers are with virtio-scsi in oVirt (any driver version; none gave
me better performance) and the SATA controller in VirtualBox. Needless to
say, this is frustrating.
I have also tried the oVirt virtio-blk driver ("VirtIO" in the interface);
it gave me a 10-15% improvement, but never up to the level I would expect
from an enterprise-grade solution. I also tried IDE: I could not even write
at 10 kbps on it, and saw approximately 10 IOPM (yes, I/O per minute!).
Is there something I am missing?
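In case it helps to spot what I am missing, this is what I checked on the
host while the VM was writing (a sketch; the storage-domain mount point
under /rhev/data-center/mnt/ and the test file name are just examples from
my setup):
# cache/aio mode qemu is using for the VM's disks
ps -ef | grep [q]emu-kvm | tr ',' '\n' | grep -E 'cache=|aio='
# raw synchronous-write rate into the NFS-mounted storage domain
dd if=/dev/zero of=/rhev/data-center/mnt/srvhc02:_storage/ddtest \
   bs=4k count=2000 oflag=dsync
rm -f /rhev/data-center/mnt/srvhc02:_storage/ddtest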
--
https://nexcess.ca
JEAN-FRANÇOIS COURTEAU
President and CEO
C: jean-francois.courteau(a)nexcess.ca
T: +1 (418) 558-5169
-------------------------