Terrible Disk Performance on Windows 10 VM
by regloff@gmail.com
I recently installed a Windows 10 VM under oVirt 4.4.5.11-1.el8
I also installed the drivers using "virtio-win-1.9.16.iso" (then re-installed them after updates, just in case it helped).
I found a similar complaint with VMWare (https://communities.vmware.com/t5/VMware-Workstation-Pro/vmware-workstati...)
So I looked into that and made a registry change for the AHCI controller, and also set the 'viodiskcache' custom property to writeback (seen in another thread on here). Those two changes seemed to help marginally, but not much at all.
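For reference, the viodiskcache custom property is normally defined on the engine with something along these lines (the exact regex and the --cver value here are my assumptions, not necessarily what that other thread used):
# engine-config -g UserDefinedVMProperties
# engine-config -s UserDefinedVMProperties='viodiskcache=^(none|writeback|writethrough)$' --cver=4.4
# systemctl restart ovirt-engine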
When I do just about anything, disk usage spikes to 100% and stays there for quite a while. Write speeds rarely break 100 KB/s.
Not even sure what to look for next. My Linux VMs don't seem to have this issue, and the host it's running on is barely loaded at all; CPU and memory stay close to unused. oVirt didn't show much, but in Task Manager in the Windows VM you can see the disk queue pegged completely.
I've given the VM 6GB of RAM, so that's not it. I even turned off paging in the Windows VM as well, to no avail.
This is an example of disk usage, just opening 'Groove Music' for the first time.
https://i.postimg.cc/FRLq28Mw/Disk-Activity.png
Any ideas? :)
3 years, 4 months
Question about pci passthrough for guest (SSD passthrough) ?
by Tony Pearce
I have recently added a freshly installed host on 4.4, with 3 x NVIDIA GPUs
which have been passed through to a guest VM instance. This went very
smoothly and the guest can use all 3 host GPUs.
The next thing we did was to configure "local storage" so that the single
guest instance can make use of faster NVMe storage (100,000 IOPS) compared
to the network iSCSI storage, which is rated at 35,000 IOPS.
The caveat with local storage is that I can only use the remaining free
space in /var/ for disk images. The result is that the 1TB SSD has around
700GB of free space remaining.
So I was wondering about simply passing through the NVMe SSD (PCI) to the
guest, so the guest can utilise the full SSD.
Are there any "gotcha's" with doing this other than the usual gpu
passthrough ones?
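For context, before trying this I would identify the NVMe device's PCI address and its IOMMU group with something like the following (the 0000:3b:00.0 address is only a placeholder):
# lspci -Dnn | grep -i 'Non-Volatile'
# readlink /sys/bus/pci/devices/0000:3b:00.0/iommu_group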
Also my apologies if this is duplicated. I originally asked this [1] a
couple of days ago but I am not sure what happened.
Kind regards,
Tony Pearce
[1] Question about pci pass-thru - Users - Ovirt List Archives
<https://lists.ovirt.org/archives/list/users@ovirt.org/thread/BN7XHUN2DP6R...>
3 years, 4 months
Combining Virtual machine image with multiple disks attached
by kkchn.in@gmail.com
I have a few VMs in a Red Hat Virtualization (RHV) environment (using RHV-M 4.1) managed by a third party.
Now I am in the process of migrating those VMs to my cloud setup running OpenStack Ussuri with the KVM hypervisor and Glance image storage.
The third party is shutting down each VM and handing over each VM's image together with its attached volume disks.
There are three folders which contain the images for each VM.
These folders contain the base OS image and the attached LVM disk images (from time to time they added hard disks and used LVM for storing data).
Is there a way to export all these images as a single image file, instead of multiple image files, from RHV-M itself? Is this possible?
If possible, how can I combine all these disk images into a single image that can then be uploaded to our cloud's Glance storage as one image?
Is this possible, or am I asking a stupid question about something that is theoretically not possible?
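For context, the only per-disk path I know of so far is converting and uploading each disk separately, roughly like this (the filenames and image names are placeholders), which is why I am asking whether a single combined image is possible:
# qemu-img convert -O qcow2 vm1-disk1.img vm1-disk1.qcow2
# openstack image create --disk-format qcow2 --container-format bare --file vm1-disk1.qcow2 vm1-disk1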
Thanks in Advance for sharing your thoughts,
Kris.
3 years, 4 months
Restored engine backup: The provided authorization grant for the auth code has expired.
by Nicolás
Hi,
I'm restoring a full ovirt engine backup, having used the --scope=all
option, for oVirt 4.3.
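For reference, the restore was run with something along these lines (flags reproduced from memory, so they may not match my exact command):
# engine-backup --mode=restore --scope=all --file=engine-backup.tar.bz2 --log=restore.log --provision-db --restore-permissions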
I restored the backup on a fresh CentOS7 machine. The process went well,
but when trying to log into the restored authentication system I get the
following message which won't allow me to log in:
The provided authorization grant for the auth code has expired.
What does that mean and how can it be fixed?
Thanks.
Nicolás
3 years, 4 months
Accessing the oVirt Console Remotely
by louisb@ameritech.net
I've installed oVirt on my server (HP DL380 Gen10) and was able to start the configuration process on the local machine. I tried to access the oVirt console remotely via the web, however I've had no success. I'm using the same URL that is used locally, but when I open the URL in my Firefox browser I get a message that it is unable to connect. I dropped the firewall on both the server and the client machine as a possible solution, but it did not work.
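In case it is useful, this is roughly how I checked the listening ports and firewall state on the server (generic commands, nothing oVirt-specific):
# ss -tlnp | grep -E '(:80|:443)'
# firewall-cmd --list-services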
Is there something extra that is needed to access the ovirt console remotely?
The FQDN is entered into my DNS; however, I did find during the configuration process that its IP was consumed by ovirtmgmt. I was surprised to see that, and I'm just assuming it's part of the configuration process. I have other ports on my server and I have entered them in my DNS as well, with the same name but a different IP address.
What must I do to gain access to the oVirt console remotely? It would surely make the configuration process easier from my desk versus being in the server area.
Thanks
3 years, 4 months
Self hosted engine installation - Failed to deploy the VM on ISCSI storage
by ericsitunew@gmail.com
I was trying to install the latest oVirt engine v4.4.7, but it failed at the following step when I was using the command line.
I have tried installing as user root or other sudoers, but same issue.
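For reference, a check along these lines (the path is taken from the error below; I am not sure which user actually needs access) should show the ownership and SELinux labels involved:
# ls -ld /var/tmp
# ls -lZ /var/tmp/localvm*/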
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Copy configuration archive to storage]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["dd", "bs=20480", "count=1", "oflag=direct", "if=/var/tmp/localvmV8yIIX/5607a2c3-7e94-4403-ad86-7c3bcc745b16", "of=/rhev/data-center/mnt/blockSD/61612a87-4b25-4e09-aecd-02b63c679cf1/images/f9ebe70c-58f4-4229-a149-a18a09571d08/5607a2c3-7e94-4403-ad86-xxxxxxxxxxxx"], "delta": "0:00:00.011759", "end": "2020-08-14 09:03:35.702527", "msg": "non-zero return code", "rc": 1, "start": "2020-08-14 09:03:35.690768", "stderr": "dd: failed to open ‘/var/tmp/localvmV8yIIX/5607a2c3-7e94-4403-ad86-7c3bcc745b16’: Permission denied", "stderr_lines": ["dd: failed to open ‘/var/tmp/localvmV8yIIX/5607a2c3-7e94-4403-ad86-xxxxxxxxxxxx: Permission denied"], "stdout": "", "stdout_lines": []}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
[ INFO ] Stage: Clean up
[ INFO ] Cleaning temporary resources
Just curious if anyone has hit the same issue? Thank you.
3 years, 4 months
Host not becoming active due to VDSM failure
by Vinícius Ferrão
Hello,
I have a host that's failing to bring up VDSM. The logs don't say anything specific, but there's a Python error about DHCP in them. Has anyone seen a similar issue?
[root@rhvpower ~]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: disabled)
Active: inactive (dead)
Jul 30 01:53:40 rhvpower.local.versatushpc.com.br systemd[1]: Dependency failed for Virtual Desktop Server Manager.
Jul 30 01:53:40 rhvpower.local.versatushpc.com.br systemd[1]: vdsmd.service: Job vdsmd.service/start failed with result 'dependency'.
Jul 30 12:34:12 rhvpower.local.versatushpc.com.br systemd[1]: Dependency failed for Virtual Desktop Server Manager.
Jul 30 12:34:12 rhvpower.local.versatushpc.com.br systemd[1]: vdsmd.service: Job vdsmd.service/start failed with result 'dependency'.
[root@rhvpower ~]# systemctl start vdsmd
A dependency job for vdsmd.service failed. See 'journalctl -xe' for details.
On the logs I got the following messages:
==> /var/log/vdsm/upgrade.log <==
MainThread::DEBUG::2021-07-30 12:34:55,143::libvirtconnection::168::root::(get) trying to connect libvirt
MainThread::INFO::2021-07-30 12:34:55,167::netconfpersistence::238::root::(_clearDisk) Clearing netconf: /var/lib/vdsm/staging/netconf
MainThread::INFO::2021-07-30 12:34:55,178::netconfpersistence::188::root::(save) Saved new config RunningConfig({'ovirtmgmt': {'netmask': '255.255.255.0', 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'ipaddr': '10.20.0.106', 'defaultRoute': True, 'dhcpv6': False, 'gateway': '10.20.0.1', 'mtu': 1500, 'switch': 'legacy', 'stp': False, 'bootproto': 'none', 'nameservers': ['10.20.0.1']}, 'servers': {'vlan': 172, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-other': {'vlan': 2020, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-nodes1': {'vlan': 2021, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-nodes3': {'vlan': 2023, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-nodes2': {'vlan': 2022, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'nfs': {'vlan': 200, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'storage': {'vlan': 192, 'netmask': '255.255.255.240', 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': False, 'ipaddr': '192.168.10.6', 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-nodes4': {'vlan': 2024, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}}, {'bond0': {'nics': ['enP48p1s0f2', 'enP48p1s0f3'], 'options': 'mode=4', 'switch': 'legacy', 'hwaddr': '98:be:94:78:cc:72'}}, {}) to [/var/lib/vdsm/staging/netconf/nets,/var/lib/vdsm/staging/netconf/bonds,/var/lib/vdsm/staging/netconf/devices]
MainThread::INFO::2021-07-30 12:34:55,179::netconfpersistence::238::root::(_clearDisk) Clearing netconf: /var/lib/vdsm/persistence/netconf
MainThread::INFO::2021-07-30 12:34:55,188::netconfpersistence::188::root::(save) Saved new config PersistentConfig({'ovirtmgmt': {'netmask': '255.255.255.0', 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'ipaddr': '10.20.0.106', 'defaultRoute': True, 'dhcpv6': False, 'gateway': '10.20.0.1', 'mtu': 1500, 'switch': 'legacy', 'stp': False, 'bootproto': 'none', 'nameservers': ['10.20.0.1']}, 'servers': {'vlan': 172, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-other': {'vlan': 2020, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-nodes1': {'vlan': 2021, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-nodes3': {'vlan': 2023, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-nodes2': {'vlan': 2022, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'nfs': {'vlan': 200, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'storage': {'vlan': 192, 'netmask': '255.255.255.240', 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': False, 'ipaddr': '192.168.10.6', 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-nodes4': {'vlan': 2024, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}}, {'bond0': {'nics': ['enP48p1s0f2', 'enP48p1s0f3'], 'options': 'mode=4', 'switch': 'legacy', 'hwaddr': '98:be:94:78:cc:72'}}, {}) to [/var/lib/vdsm/persistence/netconf/nets,/var/lib/vdsm/persistence/netconf/bonds,/var/lib/vdsm/persistence/netconf/devices]
MainThread::DEBUG::2021-07-30 12:34:55,188::cmdutils::130::root::(exec_cmd) /usr/share/openvswitch/scripts/ovs-ctl status (cwd None)
==> /var/log/vdsm/supervdsm.log <==
restore-net::INFO::2021-07-30 12:34:55,924::restore_net_config::69::root::(_restore_sriov_config) Non persisted SRIOV devices found: {'0033:01:00.0', '0003:01:00.0'}
restore-net::INFO::2021-07-30 12:34:55,924::restore_net_config::458::root::(restore) starting network restoration.
restore-net::DEBUG::2021-07-30 12:34:55,942::restore_net_config::366::root::(_wait_for_for_all_devices_up) All devices are up.
restore-net::DEBUG::2021-07-30 12:34:55,968::cmdutils::130::root::(exec_cmd) /sbin/tc qdisc show (cwd None)
restore-net::DEBUG::2021-07-30 12:34:55,989::cmdutils::138::root::(exec_cmd) SUCCESS: <err> = b''; <rc> = 0
restore-net::DEBUG::2021-07-30 12:34:56,087::plugin::261::root::(_check_version_mismatch) NetworkManager version 1.30.0
restore-net::DEBUG::2021-07-30 12:34:56,088::context::144::root::(register_async) Async action: Retrieve applied config: ethernet enP48p1s0f2 started
restore-net::DEBUG::2021-07-30 12:34:56,088::context::144::root::(register_async) Async action: Retrieve applied config: ethernet enP48p1s0f3 started
restore-net::DEBUG::2021-07-30 12:34:56,088::context::144::root::(register_async) Async action: Retrieve applied config: bridge xcat-nodes3 started
restore-net::DEBUG::2021-07-30 12:34:56,089::context::144::root::(register_async) Async action: Retrieve applied config: bridge xcat-other started
restore-net::DEBUG::2021-07-30 12:34:56,089::context::144::root::(register_async) Async action: Retrieve applied config: bridge xcat-nodes4 started
restore-net::DEBUG::2021-07-30 12:34:56,089::context::144::root::(register_async) Async action: Retrieve applied config: bridge ovirtmgmt started
restore-net::DEBUG::2021-07-30 12:34:56,089::context::144::root::(register_async) Async action: Retrieve applied config: bond bond0 started
restore-net::DEBUG::2021-07-30 12:34:56,089::context::144::root::(register_async) Async action: Retrieve applied config: vlan bond0.192 started
restore-net::DEBUG::2021-07-30 12:34:56,089::context::144::root::(register_async) Async action: Retrieve applied config: vlan bond0.2020 started
restore-net::DEBUG::2021-07-30 12:34:56,090::context::144::root::(register_async) Async action: Retrieve applied config: vlan bond0.2022 started
restore-net::DEBUG::2021-07-30 12:34:56,090::context::144::root::(register_async) Async action: Retrieve applied config: vlan bond0.2024 started
restore-net::DEBUG::2021-07-30 12:34:56,090::context::144::root::(register_async) Async action: Retrieve applied config: vlan bond0.172 started
restore-net::DEBUG::2021-07-30 12:34:56,090::context::144::root::(register_async) Async action: Retrieve applied config: vlan bond0.2021 started
restore-net::DEBUG::2021-07-30 12:34:56,090::context::144::root::(register_async) Async action: Retrieve applied config: vlan bond0.2023 started
restore-net::DEBUG::2021-07-30 12:34:56,090::context::144::root::(register_async) Async action: Retrieve applied config: vlan bond0.200 started
restore-net::DEBUG::2021-07-30 12:34:56,090::context::144::root::(register_async) Async action: Retrieve applied config: bridge xcat-nodes1 started
restore-net::DEBUG::2021-07-30 12:34:56,090::context::144::root::(register_async) Async action: Retrieve applied config: bridge xcat-nodes2 started
restore-net::DEBUG::2021-07-30 12:34:56,091::context::144::root::(register_async) Async action: Retrieve applied config: bridge nfs started
restore-net::DEBUG::2021-07-30 12:34:56,091::context::144::root::(register_async) Async action: Retrieve applied config: bridge servers started
restore-net::DEBUG::2021-07-30 12:34:56,091::context::144::root::(register_async) Async action: Retrieve applied config: bridge virbr0 started
restore-net::DEBUG::2021-07-30 12:34:56,092::context::153::root::(finish_async) Async action: Retrieve applied config: ethernet enP48p1s0f2 finished
restore-net::DEBUG::2021-07-30 12:34:56,093::context::153::root::(finish_async) Async action: Retrieve applied config: ethernet enP48p1s0f3 finished
restore-net::DEBUG::2021-07-30 12:34:56,093::context::153::root::(finish_async) Async action: Retrieve applied config: bridge xcat-nodes3 finished
restore-net::DEBUG::2021-07-30 12:34:56,094::context::153::root::(finish_async) Async action: Retrieve applied config: bridge xcat-other finished
restore-net::DEBUG::2021-07-30 12:34:56,095::context::153::root::(finish_async) Async action: Retrieve applied config: bridge xcat-nodes4 finished
restore-net::DEBUG::2021-07-30 12:34:56,096::context::153::root::(finish_async) Async action: Retrieve applied config: bridge ovirtmgmt finished
restore-net::DEBUG::2021-07-30 12:34:56,097::context::153::root::(finish_async) Async action: Retrieve applied config: bond bond0 finished
restore-net::DEBUG::2021-07-30 12:34:56,098::context::153::root::(finish_async) Async action: Retrieve applied config: vlan bond0.192 finished
restore-net::DEBUG::2021-07-30 12:34:56,099::context::153::root::(finish_async) Async action: Retrieve applied config: vlan bond0.2020 finished
restore-net::DEBUG::2021-07-30 12:34:56,099::context::153::root::(finish_async) Async action: Retrieve applied config: vlan bond0.2022 finished
restore-net::DEBUG::2021-07-30 12:34:56,100::context::153::root::(finish_async) Async action: Retrieve applied config: vlan bond0.2024 finished
restore-net::DEBUG::2021-07-30 12:34:56,100::context::153::root::(finish_async) Async action: Retrieve applied config: vlan bond0.172 finished
restore-net::DEBUG::2021-07-30 12:34:56,101::context::153::root::(finish_async) Async action: Retrieve applied config: vlan bond0.2021 finished
restore-net::DEBUG::2021-07-30 12:34:56,101::context::153::root::(finish_async) Async action: Retrieve applied config: vlan bond0.2023 finished
restore-net::DEBUG::2021-07-30 12:34:56,102::context::153::root::(finish_async) Async action: Retrieve applied config: vlan bond0.200 finished
restore-net::DEBUG::2021-07-30 12:34:56,102::context::153::root::(finish_async) Async action: Retrieve applied config: bridge xcat-nodes1 finished
restore-net::DEBUG::2021-07-30 12:34:56,103::context::153::root::(finish_async) Async action: Retrieve applied config: bridge xcat-nodes2 finished
restore-net::DEBUG::2021-07-30 12:34:56,105::context::153::root::(finish_async) Async action: Retrieve applied config: bridge nfs finished
restore-net::DEBUG::2021-07-30 12:34:56,106::context::153::root::(finish_async) Async action: Retrieve applied config: bridge servers finished
restore-net::DEBUG::2021-07-30 12:34:56,107::context::153::root::(finish_async) Async action: Retrieve applied config: bridge virbr0 finished
restore-net::ERROR::2021-07-30 12:34:56,167::restore_net_config::462::root::(restore) restoration failed.
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/network/restore_net_config.py", line 460, in restore
    unified_restoration()
  File "/usr/lib/python3.6/site-packages/vdsm/network/restore_net_config.py", line 112, in unified_restoration
    classified_conf = _classify_nets_bonds_config(available_config)
  File "/usr/lib/python3.6/site-packages/vdsm/network/restore_net_config.py", line 237, in _classify_nets_bonds_config
    net_info = NetInfo(netswitch.configurator.netinfo())
  File "/usr/lib/python3.6/site-packages/vdsm/network/netswitch/configurator.py", line 323, in netinfo
    _netinfo = netinfo_get(vdsmnets, compatibility)
  File "/usr/lib/python3.6/site-packages/vdsm/network/netinfo/cache.py", line 268, in get
    return _get(vdsmnets)
  File "/usr/lib/python3.6/site-packages/vdsm/network/netinfo/cache.py", line 76, in _get
    extra_info.update(_get_devices_info_from_nmstate(state, devices))
  File "/usr/lib/python3.6/site-packages/vdsm/network/netinfo/cache.py", line 165, in _get_devices_info_from_nmstate
    nmstate.get_interfaces(state, filter=devices)
  File "/usr/lib/python3.6/site-packages/vdsm/network/netinfo/cache.py", line 164, in <dictcomp>
    for ifname, ifstate in six.viewitems(
  File "/usr/lib/python3.6/site-packages/vdsm/network/nmstate/api.py", line 228, in is_dhcp_enabled
    return util_is_dhcp_enabled(family_info)
  File "/usr/lib/python3.6/site-packages/vdsm/network/nmstate/bridge_util.py", line 137, in is_dhcp_enabled
    return family_info[InterfaceIP.ENABLED] and family_info[InterfaceIP.DHCP]
KeyError: 'dhcp'
The engine is fencing the machine constantly, but it reboots and comes back with the same issue after the reboot.
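If it helps with debugging, the interface state that VDSM parses can be dumped directly with the standard nmstate CLI, to see which interface report is missing the 'dhcp' key:
# nmstatectl show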
Thanks all.
3 years, 4 months
CPU performance test results in three scenarios
by Tommy Sway
Hello, I recently conducted a test to compare the CPU performance of a
physical machine, a high-performance VM (CPU passthrough, pinned CPUs), and a
common VM.
Here are the results:
1. Physical machine:
# sysbench --test=cpu --cpu-max-prime=100000000 --threads=2 run
WARNING: the --test option is deprecated. You can pass a script name or path
on the command line without any options.
sysbench 1.0.17 (using system LuaJIT 2.0.4)
Running the test with following options:
Number of threads: 2
Initializing random number generator from current time
Prime numbers limit: 100000000
Initializing worker threads...
Threads started!
CPU speed:
events per second: 0.01
General statistics:
total time: 346.0588s
total number of events: 2
Latency (ms):
min: 345943.52
avg: 346001.09
max: 346058.65
95th percentile: 100000.00
sum: 692002.17
Threads fairness:
events (avg/stddev): 1.0000/0.00
execution time (avg/stddev): 346.0011/0.06
2. High Performance VM:
# sysbench --test=cpu --cpu-max-prime=100000000 --threads=2 run
WARNING: the --test option is deprecated. You can pass a script name or path
on the command line without any options.
sysbench 1.0.17 (using system LuaJIT 2.0.4)
Running the test with following options:
Number of threads: 2
Initializing random number generator from current time
Prime numbers limit: 100000000
Initializing worker threads...
Threads started!
CPU speed:
events per second: 0.01
General statistics:
total time: 351.8625s
total number of events: 2
Latency (ms):
min: 351416.59
avg: 351639.01
max: 351861.43
95th percentile: 100000.00
sum: 703278.02
Threads fairness:
events (avg/stddev): 1.0000/0.00
execution time (avg/stddev): 351.6390/0.22
3. Common VM:
# sysbench --test=cpu --cpu-max-prime=100000000 --threads=2 run
WARNING: the --test option is deprecated. You can pass a script name or path
on the command line without any options.
sysbench 1.0.17 (using system LuaJIT 2.0.4)
Running the test with following options:
Number of threads: 2
Initializing random number generator from current time
Prime numbers limit: 100000000
Initializing worker threads...
Threads started!
CPU speed:
events per second: 0.01
General statistics:
total time: 354.9108s
total number of events: 2
Latency (ms):
min: 354761.26
avg: 354835.99
max: 354910.73
95th percentile: 100000.00
sum: 709671.99
Threads fairness:
events (avg/stddev): 1.0000/0.00
execution time (avg/stddev): 354.8360/0.07
Result:
There is little difference in CPU performance between these three scenarios,
so why distinguish between high-performance and regular virtual machines? I
don't think it makes much sense.
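(If it is useful, I can repeat a shorter run like the following, with standard sysbench options but a lower prime limit and a fixed duration, so that more than two events complete per run:)
# sysbench cpu --cpu-max-prime=20000 --threads=2 --time=60 run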
3 years, 4 months
The provided authorization grant for the auth code has expired.
by Nicolás
Hi,
Some time ago I posted a similar question and couldn't get it solved. I
couldn't spend more time on this up until now, so I'm trying again and
having the same error, which I'm unable to fix. I would really
appreciate some help since I cannot find any documentation for this.
I'm restoring a full ovirt engine backup, having used the --scope=all
option, for oVirt 4.3.
I restored the backup on a fresh CentOS7 machine. The process went well,
but when trying to log into the restored authentication system I get the
following message which won't allow me to log in:
The provided authorization grant for the auth code has expired.
What does that mean and how can it be fixed?
I'm also attaching a screenshot.
Thanks.
Nicolás
3 years, 4 months