Upgrading Self-Hosted Engine from 4.3 to oVirt 4.4
by Adam Xu
Hi oVirt,
I just tried to upgrade a self-hosted engine from 4.3.10 to 4.4.1.4. I
followed the steps in this document:
https://www.ovirt.org/documentation/upgrade_guide/#SHE_Upgrading_from_4-3
The old 4.3 environment uses an FC storage domain for the engine, and I
have created a new FC storage volume (vv) for the new storage domain to
be used in the next steps.
I backed up the old 4.3 environment and prepared a completely new host on
which to restore it.
In chapter 4.4, step 8, it says:
"During the deployment you need to provide a new storage domain. The
deployment script renames the 4.3 storage domain and retains its data."
It does rename the old storage domain, but it did not let me choose a new
storage domain during the deployment. So the new engine was just deployed
on the new host's local storage and cannot be moved to the FC storage domain.
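For reference, the commands I ran were roughly these (a sketch from memory;
the backup file name is just an example):
# on the old 4.3 engine, take a backup
engine-backup --mode=backup --file=engine-43.bck --log=engine-43-backup.log
# on the new host, run the restore-based deployment, which should prompt
# for the new (FC) storage domain during the storage stage
hosted-engine --deploy --restore-from-file=engine-43.bck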
Can anyone tell me what the problem is?
Thanks
--
Adam Xu
4 years, 2 months
AAA Extension mapping could not be found
by dominique.deschenes@gcgenicom.com
I tried to use AAA mapping, but I get this message:
Sep 21, 2020 2:12:05 PM org.ovirt.engine.exttool.aaa.AAAServiceImpl run
INFO: Iteration: 0
Sep 21, 2020 2:12:05 PM org.ovirt.engine.exttool.core.ExtensionsToolExecutor main
SEVERE: Extension mapping could not be found
Sep 21, 2020 2:12:05 PM org.ovirt.engine.exttool.core.ExtensionsToolExecutor main
FINE: Exception:
org.ovirt.engine.core.extensions.mgr.ConfigurationException: Extension mapping could not be found
at org.ovirt.engine.core.extensions-manager//org.ovirt.engine.core.extensions.mgr.ExtensionsManager.getExtensionByName(ExtensionsManager.java:286)
at org.ovirt.engine.core.extensions-tool//org.ovirt.engine.exttool.aaa.AAAServiceImpl$AAAProfile.<init>(AAAServiceImpl.java:846)
at org.ovirt.engine.core.extensions-tool//org.ovirt.engine.exttool.aaa.AAAServiceImpl$Action.lambda$static$3(AAAServiceImpl.java:154)
at org.ovirt.engine.core.extensions-tool//org.ovirt.engine.exttool.aaa.AAAServiceImpl$Action.execute(AAAServiceImpl.java:417)
at org.ovirt.engine.core.extensions-tool//org.ovirt.engine.exttool.aaa.AAAServiceImpl.run(AAAServiceImpl.java:686)
at org.ovirt.engine.core.extensions-tool//org.ovirt.engine.exttool.core.ExtensionsToolExecutor.main(ExtensionsToolExecutor.java:121)
at org.jboss.modules.Module.run(Module.java:352)
at org.jboss.modules.Module.run(Module.java:320)
at org.jboss.modules.Main.main(Main.java:593)
It seems like it cannot find my mapping.properties file.
I followed the Howto on this page.
https://www.ovirt.org/develop/release-management/features/infra/aaa_faq.html
Here is my config:
tail /etc/ovirt-engine/extensions.d/local.lan-authn.properties
ovirt.engine.aaa.authn.authz.plugin ovirt.engine.extension.name = local.lan-authn
ovirt.engine.extension.bindings.method = jbossmodule
ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine.extension.aaa.ldap
ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engine.extension.aaa.ldap.AuthnExtension
ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authn
ovirt.engine.aaa.authn.profile.name = local.lan
ovirt.engine.aaa.authn.authz.plugin = local.lan
ovirt.engine.aaa.authn.mapping.plugin = mapping
config.profile.file.1 = ../aaa/local.lan.properties
tail /etc/ovirt-engine/extensions.d/mapping.properties
ovirt.engine.extension.name = mapping
ovirt.engine.extension.bindings.method = jbossmodule
ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine-extensions.aaa.misc
ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engineextensions.aaa.misc.mapping.MappingExtension
ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Mapping
config.mapUser.type = regex
config.mapUser.regex.pattern = ^(?<user>[^@]*)$
config.mapUser.regex.replacement = ${user}@domain.com
config.mapUser.regex.mustMatch = false
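In case it helps, a minimal way to exercise the profile with the extensions
tool looks like this (a sketch; the user name is just a placeholder):
ovirt-engine-extensions-tool aaa login-user --profile=local.lan --user-name=testuser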
4 years, 2 months
New install
by info@worldhostess.com
I have an issue with a new installation. The server name keeps changing
and it keeps disconnecting me. It is also impossible to log in as admin.
The install went through without any issues, but it seems something is
wrong.
Server: engine01.xxxx.com
Log in with your server user account.
Server: node01.xxxx.com
Log in with your server user account.
4 years, 2 months
routing policy problem in ovirt 4.4
by g.vasilopoulos@uoc.gr
Hello, I created an oVirt 4.4 cluster with 5 servers. I have a different gateway for the ovirtmgmt and display networks. In 4.4 routing is managed by NetworkManager, and NetworkManager seems to "forget" and then recreate the route each time. As a result I cannot connect to a virtual machine's console unless I first run a ping for about a minute against the hypervisor's display-network IP, so that NetworkManager "wakes up" and recreates the route. I use this workaround, but it is not something a user can be expected to do.
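For reference, what I would expect to be the persistent equivalent of the
missing route is something like this (a sketch only; the connection name,
subnet, and gateway below are hypothetical placeholders for my display
network):
# show the routes NetworkManager currently knows about
ip route show
# add a persistent route to the display network's connection profile
nmcli connection modify display-net +ipv4.routes "10.10.20.0/24 10.10.20.1"
nmcli connection up display-net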
What is the proper way to do this?
Thank you
4 years, 2 months
oVirt HCI Setup Wizard - Drive Specification Ignored
by Jeremey Wise
Greetings:
3 x servers; each server has 1 x 512GB SSD and 2 x 1TB SSD (JBOD)
Goal: use HCI disk setup wizard to deploy initial structure
Each server's disks scan in as different /dev/sd# devices, so I am trying
to use the clearer /dev/mapper/<disk ID> paths.
As such I set this per the table below:
# Select each server and set each drive <<<<<<<<<< double-check the drive
device IDs, as they do NOT match up per host
# I switched to the /dev/mapper names to avoid the ambiguity of /dev/sd#
thor:   /dev/sdc -> /dev/mapper/Samsung_SSD_850_PRO_512GB_S250NXAGA15787L
odin:   /dev/sdb -> /dev/mapper/Micron_1100_MTFDDAV512TBN_17401F699137
medusa: /dev/sdb -> /dev/mapper/SAMSUNG_SSD_PM851_mSATA_512GB_S1EWNYAF609306
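For anyone reproducing this, the per-host names can be checked with
something like (a sketch):
# list the dm names and match them to model/serial on each host
ls -l /dev/mapper/
lsblk -o NAME,MODEL,SERIAL,SIZE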
# Note that drives need to be completely clear of any partition or file
system
[root@thor /]# gdisk /dev/sdc
GPT fdisk (gdisk) version 1.0.3
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Command (? for help): x
Expert command (? for help): z
About to wipe out GPT on /dev/sdc. Proceed? (Y/N): y
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Blank out MBR? (Y/N): y
But the deployment fails with this error:
<snip>
TASK [gluster.infra/roles/backend_setup : Filter none-existing devices]
********
task path:
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/get_vg_groupings.yml:38
ok: [thorst.penguinpages.local] => {"ansible_facts":
{"gluster_volumes_by_groupname": {}}, "changed": false}
ok: [odinst.penguinpages.local] => {"ansible_facts":
{"gluster_volumes_by_groupname": {}}, "changed": false}
ok: [medusast.penguinpages.local] => {"ansible_facts":
{"gluster_volumes_by_groupname": {}}, "changed": false}
TASK [gluster.infra/roles/backend_setup : Make sure thick pvs exists in
volume group] ***
task path:
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thick_lv_create.yml:37
TASK [gluster.infra/roles/backend_setup : update LVM fact's]
*******************
task path:
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thick_lv_create.yml:83
skipping: [thorst.penguinpages.local] => {"changed": false, "skip_reason":
"Conditional result was False"}
skipping: [odinst.penguinpages.local] => {"changed": false, "skip_reason":
"Conditional result was False"}
skipping: [medusast.penguinpages.local] => {"changed": false,
"skip_reason": "Conditional result was False"}
TASK [gluster.infra/roles/backend_setup : Create thick logical volume]
*********
task path:
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thick_lv_create.yml:90
failed: [medusast.penguinpages.local] (item={'vgname': '
gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L', 'lvname':
'gluster_lv_engine', 'size': '1000G'}) => {"ansible_index_var": "index",
"ansible_loop_var": "item", "changed": false, "err": " Volume group
\"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L\" not found.\n
Cannot process volume group
gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L\n", "index": 0,
"item": {"lvname": "gluster_lv_engine", "size": "1000G", "vgname":
"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L"}, "msg": "Volume
group gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L does not
exist.", "rc": 5}
changed: [thorst.penguinpages.local] => (item={'vgname': '
gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L', 'lvname':
'gluster_lv_engine', 'size': '1000G'}) => {"ansible_index_var": "index",
"ansible_loop_var": "item", "changed": true, "index": 0, "item": {"lvname":
"gluster_lv_engine", "size": "1000G", "vgname":
"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L"}, "msg": ""}
failed: [odinst.penguinpages.local] (item={'vgname':
'gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L', 'lvname':
'gluster_lv_engine', 'size': '1000G'}) => {"ansible_index_var": "index",
"ansible_loop_var": "item", "changed": false, "err": " Volume group
\"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L\" not found.\n
Cannot process volume group
gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L\n", "index": 0,
"item": {"lvname": "gluster_lv_engine", "size": "1000G", "vgname":
"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L"}, "msg": "Volume
group gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L does not
exist.", "rc": 5}
NO MORE HOSTS LEFT
*************************************************************
NO MORE HOSTS LEFT
*************************************************************
PLAY RECAP
*********************************************************************
medusast.penguinpages.local : ok=23 changed=5 unreachable=0
failed=1 skipped=34 rescued=0 ignored=0
odinst.penguinpages.local : ok=23 changed=5 unreachable=0 failed=1
skipped=34 rescued=0 ignored=0
thorst.penguinpages.local : ok=30 changed=9 unreachable=0 failed=0
skipped=29 rescued=0 ignored=0
Please check /var/log/cockpit/ovirt-dashboard/gluster-deployment.log for
more informations.
############
Why is oVirt ignoring the explicit device I set (and double-checked) for
the deployment?
Attached are the Ansible file the wizard creates and the one I had to edit
to correct it to what the wizard should have built. A rough sketch of the
corrected variables is below.
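To illustrate, this is roughly the shape of the per-host variables I ended
up with (a sketch only; I am assuming the gluster.infra role's
gluster_infra_volume_groups / gluster_infra_thick_lvs variables, the
device names are per my hosts, e.g. odin, and the size is taken from the
log above):
# host_vars for odin (sketch)
gluster_infra_volume_groups:
  - vgname: gluster_vg_Micron_1100_MTFDDAV512TBN_17401F699137
    pvname: /dev/mapper/Micron_1100_MTFDDAV512TBN_17401F699137
gluster_infra_thick_lvs:
  - vgname: gluster_vg_Micron_1100_MTFDDAV512TBN_17401F699137
    lvname: gluster_lv_engine
    size: 1000G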
--
penguinpages <jeremey.wise(a)gmail.com>
4 years, 2 months
New oVirt Install - Hosted Engine Deployment Fails
by mblanton@vnet.net
I am attempting a new oVirt install. I have two nodes installed (with oVirt Node 4.4). I have NFS shared storage for the hosted engine.
Both nodes are Dell servers with quad-core Xeon CPUs and 32GB of RAM. Both have run hypervisors before (XCP-ng and Proxmox); however, I'm very interested in learning oVirt now.
The hosted engine deployment (through cockpit) fails during the "Finish" stage.
I do see the initial files created on the NFS storage.
[ INFO ] TASK [ovirt.hosted_engine_setup : Convert CPU model name]
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute ''\n\nThe error appears to be in '/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/create_target_vm/01_create_target_hosted_engine_vm.yml': line 105, column 16, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# - debug: var=server_cpu_dict\n ^ here\n\nThere appears to be both 'k=v' shorthand syntax and YAML in this task. Only one syntax may be used.\n"}
2020-09-13 17:39:56,507+0000 ERROR ansible failed {
"ansible_host": "localhost",
"ansible_playbook": "/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
"ansible_result": {
"_ansible_no_log": false,
"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute ''
\n\nThe error appears to be in '/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/create_target_vm/01_create_targ
et_hosted_engine_vm.yml': line 105, column 16, but may\nbe elsewhere in the file depending on the exact syntax problem.\
n\nThe offending line appears to be:\n\n# - debug: var=server_cpu_dict\n ^ here\n\nThere appears to be bo
th 'k=v' shorthand syntax and YAML in this task. Only one syntax may be used.\n"
},
"ansible_task": "Convert CPU model name",
"ansible_type": "task",
"status": "FAILED",
"task_duration": 1
}
I can see the hosted engine is created and running locally on the node.
I can even SSH into the HostedEngineLocal instance.
[root@ovirt-node01]# virsh --readonly list
Id Name State
-----------------------------------
1 HostedEngineLocal running
Looking at the "Convert CPU model name" task:
https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/blob/master/ta...
set_fact:
  cluster_cpu_model: "{{ server_cpu_dict[cluster_cpu.type] }}"
server_cpu_dict is good; I can find it in the logs. cluster_cpu is undefined.
But that is normal, correct? The cluster CPU type is "undefined" until the first host is added to the cluster.
The error makes it seem that server_cpu_dict, and not cluster_cpu.type, is the problem.
I'm not sure this is really the problem, but that is the only undefined variable I can find.
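For what it's worth, a dict lookup with an empty key reproduces that exact
error text, which would suggest cluster_cpu.type is coming through empty
rather than server_cpu_dict being missing. A minimal sketch (a hypothetical
standalone playbook, not part of the real role; the dict contents are made
up):
- hosts: localhost
  gather_facts: no
  tasks:
    # server_cpu_dict stands in for the real fact; key/value are placeholders
    - set_fact:
        server_cpu_dict: { "Intel Haswell Family": "Haswell-noTSX" }
    # cluster_cpu.type is empty, so server_cpu_dict[''] fails with
    # "'dict object' has no attribute ''"
    - set_fact:
        cluster_cpu_model: "{{ server_cpu_dict[cluster_cpu.type] }}"
      vars:
        cluster_cpu: { "type": "" }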
Any advice or recommendation is appreciated
-Thanks in advance
4 years, 2 months
Re: Random hosts disconnects
by Artur Socha
Hi Anton,
I am not sure whether changing this value would fix the issue. The defaults
are already pretty high, for example vdsHeartbeatInSeconds = 30 seconds,
vdsTimeout = 180 seconds, vdsConnectionTimeout = 20 seconds.
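You can inspect the current values on the engine with engine-config before
changing anything (a sketch; note that, as you found, a key only shows up
if it has a corresponding .type entry in engine-config.properties):
engine-config --list | grep -i vds
engine-config -g vdsTimeout
engine-config -g vdsConnectionTimeout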
Do you still have the relevant logs from the affected hosts?
* /var/log/vdsm/vdsm.log
* /var/log/vdsm/supervdsm.log
Please look for any jsonrpc errors, i.e. write/read errors or (connection)
timeouts. Storage-related warnings/errors might also be relevant.
Plus the system logs if possible:
journalctl -f /usr/share/vdsm/vdsmd
journalctl -f /usr/sbin/libvirtd
To get system logs for a particular time period, combine that with the -S
and -U options, for example:
journalctl -S "2020-01-12 07:00:00" -U "2020-01-12 07:15:00"
I haven't a clue what to look for there besides any warnings/errors or
anything else that seems unusual.
Artur
On Thu, Sep 17, 2020 at 8:09 AM Anton Louw via Users <users(a)ovirt.org>
wrote:
>
>
> Hi Everybody,
>
>
>
> Did some digging around, and saw a few things regarding “vdsHeartbeatInSeconds”
>
> I had a look at the properties file located at /etc/ovirt-engine/engine-config/engine-config.properties, and do not see an entry for “vdsHeartbeatInSeconds.type=Integer”.
>
> Seeing as these data centers are geographically split, could the “vdsHeartbeatInSeconds” potentially be the issue? Is it safe to increase this value after I add “vdsHeartbeatInSeconds.type=Integer” into my engine-config.properties file?
>
>
>
> Thanks
>
>
>
> *Anton Louw*
> *Cloud Engineer: Storage and Virtualization* at *Vox*
> ------------------------------
> *T:* 087 805 0000 | *D:* 087 805 1572
> *M:* N/A
> *E:* anton.louw(a)voxtelecom.co.za
> *A:* Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
> www.vox.co.za
>
>
> *From:* Anton Louw via Users <users(a)ovirt.org>
> *Sent:* 16 September 2020 09:01
> *To:* users(a)ovirt.org
> *Subject:* [ovirt-users] Random hosts disconnects
>
>
>
>
>
> Hi All,
>
>
>
> I have a strange issue in my oVirt environment. I currently have a
> standalone manager which is running in VMware. In my oVirt environment, I
> have two Data Centers. The manager is currently sitting on the same subnet
> as DC1. Randomly, hosts in DC2 will say “Not Responding” and then 2 seconds
> later, the hosts will activate again.
>
>
>
> The strange thing is, when the manager was sitting on the same subnet as
> DC2, hosts in DC1 will randomly say “Not Responding”
>
>
>
> I have tried going through the logs, but I cannot see anything out of the
> ordinary regarding why the hosts would drop connection. I have attached the
> engine.log for anybody that would like to do a spot check.
>
>
>
> Thanks
>
>
>
> *Anton Louw*
>
> *Cloud Engineer: Storage and Virtualization* at *Vox*
> ------------------------------
>
> *T:* 087 805 0000 | *D:* 087 805 1572
> *M:* N/A
> *E:* anton.louw(a)voxtelecom.co.za
> *A:* Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
> www.vox.co.za
>
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/EJL246IPBGE...
>
--
Artur Socha
Senior Software Engineer, RHV
Red Hat
4 years, 2 months
Bad volume specification
by Facundo Garat
Hi all,
I'm having some issues with one VM. The VM won't start and it shows
problems with the virtual disks, so I started the VM without any disks and
tried hot-adding the disk, and that fails too.
The servers are connected through FC; all the other VMs are working fine.
Any ideas?
Thanks!!
PS: The engine.log is showing this:
2020-09-15 20:10:37,926-03 INFO
[org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand] (default
task-168) [dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Lock Acquired to object
'EngineLock:{exclusiveLocks='[f5bd2e15-a1ab-4724-883a-988b4dc7985b=DISK]',
sharedLocks='[71db02c2-df29-4552-8a7e-cb8bb429a2ac=VM]'}'
2020-09-15 20:10:38,082-03 INFO
[org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Running command:
HotPlugDiskToVmCommand internal: false. Entities affected : ID:
71db02c2-df29-4552-8a7e-cb8bb429a2ac Type: VMAction group
CONFIGURE_VM_STORAGE with role type USER
2020-09-15 20:10:38,117-03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] START,
HotPlugDiskVDSCommand(HostName = nodo2,
HotPlugDiskVDSParameters:{hostId='1c24c269-76c3-468d-a7ce-d0332beb7aef',
vmId='71db02c2-df29-4552-8a7e-cb8bb429a2ac',
diskId='f5bd2e15-a1ab-4724-883a-988b4dc7985b', addressMap='null'}), log id:
f57ee9e
2020-09-15 20:10:38,125-03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Disk hot-plug: <?xml version="1.0"
encoding="UTF-8"?><hotplug>
<devices>
<disk snapshot="no" type="block" device="disk">
<target dev="vda" bus="virtio"/>
<source
dev="/rhev/data-center/mnt/blockSD/55327311-e47c-46b5-b168-258c5924757b/images/f5bd2e15-a1ab-4724-883a-988b4dc7985b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f">
<seclabel model="dac" type="none" relabel="no"/>
</source>
<driver name="qemu" io="native" type="qcow2" error_policy="stop"
cache="none"/>
<alias name="ua-f5bd2e15-a1ab-4724-883a-988b4dc7985b"/>
<serial>f5bd2e15-a1ab-4724-883a-988b4dc7985b</serial>
</disk>
</devices>
<metadata xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
<ovirt-vm:vm>
<ovirt-vm:device devtype="disk" name="vda">
<ovirt-vm:poolID>00000001-0001-0001-0001-000000000311</ovirt-vm:poolID>
<ovirt-vm:volumeID>bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f</ovirt-vm:volumeID>
<ovirt-vm:imageID>f5bd2e15-a1ab-4724-883a-988b4dc7985b</ovirt-vm:imageID>
<ovirt-vm:domainID>55327311-e47c-46b5-b168-258c5924757b</ovirt-vm:domainID>
</ovirt-vm:device>
</ovirt-vm:vm>
</metadata>
</hotplug>
2020-09-15 20:10:38,289-03 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Failed in 'HotPlugDiskVDS' method
2020-09-15 20:10:38,295-03 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] EVENT_ID:
VDS_BROKER_COMMAND_FAILURE(10,802), VDSM nodo2 command HotPlugDiskVDS
failed: General Exception: ("Bad volume specification {'device': 'disk',
'type': 'disk', 'diskType': 'block', 'specParams': {}, 'alias':
'ua-f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'domainID':
'55327311-e47c-46b5-b168-258c5924757b', 'imageID':
'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'poolID':
'00000001-0001-0001-0001-000000000311', 'volumeID':
'bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f', 'path':
'/rhev/data-center/mnt/blockSD/55327311-e47c-46b5-b168-258c5924757b/images/f5bd2e15-a1ab-4724-883a-988b4dc7985b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f',
'discard': False, 'format': 'cow', 'propagateErrors': 'off', 'cache':
'none', 'iface': 'virtio', 'name': 'vda', 'serial':
'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'index': 0}",)
2020-09-15 20:10:38,295-03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Command
'org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand' return
value 'StatusOnlyReturn [status=Status [code=100, message=General
Exception: ("Bad volume specification {'device': 'disk', 'type': 'disk',
'diskType': 'block', 'specParams': {}, 'alias':
'ua-f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'domainID':
'55327311-e47c-46b5-b168-258c5924757b', 'imageID':
'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'poolID':
'00000001-0001-0001-0001-000000000311', 'volumeID':
'bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f', 'path':
'/rhev/data-center/mnt/blockSD/55327311-e47c-46b5-b168-258c5924757b/images/f5bd2e15-a1ab-4724-883a-988b4dc7985b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f',
'discard': False, 'format': 'cow', 'propagateErrors': 'off', 'cache':
'none', 'iface': 'virtio', 'name': 'vda', 'serial':
'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'index': 0}",)]]'
2020-09-15 20:10:38,295-03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] HostName = nodo2
2020-09-15 20:10:38,295-03 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Command
'HotPlugDiskVDSCommand(HostName = nodo2,
HotPlugDiskVDSParameters:{hostId='1c24c269-76c3-468d-a7ce-d0332beb7aef',
vmId='71db02c2-df29-4552-8a7e-cb8bb429a2ac',
diskId='f5bd2e15-a1ab-4724-883a-988b4dc7985b', addressMap='null'})'
execution failed: VDSGenericException: VDSErrorException: Failed to
HotPlugDiskVDS, error = General Exception: ("Bad volume specification
{'device': 'disk', 'type': 'disk', 'diskType': 'block', 'specParams': {},
'alias': 'ua-f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'domainID':
'55327311-e47c-46b5-b168-258c5924757b', 'imageID':
'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'poolID':
'00000001-0001-0001-0001-000000000311', 'volumeID':
'bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f', 'path':
'/rhev/data-center/mnt/blockSD/55327311-e47c-46b5-b168-258c5924757b/images/f5bd2e15-a1ab-4724-883a-988b4dc7985b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f',
'discard': False, 'format': 'cow', 'propagateErrors': 'off', 'cache':
'none', 'iface': 'virtio', 'name': 'vda', 'serial':
'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'index': 0}",), code = 100
2020-09-15 20:10:38,296-03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] FINISH, HotPlugDiskVDSCommand,
return: , log id: f57ee9e
2020-09-15 20:10:38,296-03 ERROR
[org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Command
'org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand' failed:
EngineException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error =
General Exception: ("Bad volume specification {'device': 'disk', 'type':
'disk', 'diskType': 'block', 'specParams': {}, 'alias':
'ua-f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'domainID':
'55327311-e47c-46b5-b168-258c5924757b', 'imageID':
'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'poolID':
'00000001-0001-0001-0001-000000000311', 'volumeID':
'bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f', 'path':
'/rhev/data-center/mnt/blockSD/55327311-e47c-46b5-b168-258c5924757b/images/f5bd2e15-a1ab-4724-883a-988b4dc7985b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f',
'discard': False, 'format': 'cow', 'propagateErrors': 'off', 'cache':
'none', 'iface': 'virtio', 'name': 'vda', 'serial':
'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'index': 0}",), code = 100 (Failed
with error GeneralException and code 100)
2020-09-15 20:10:38,307-03 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] EVENT_ID:
USER_FAILED_HOTPLUG_DISK(2,001), Failed to plug disk owncloud_Disk0 to VM
owncloud (User: admin@internal-authz).
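In case it helps with diagnosis, one thing to check on the host is whether
the LV backing that volume exists and is active in the storage domain VG
(a sketch only, using the domain and volume IDs from the log above):
# block storage domains keep each volume as an LV named after the volume ID,
# in a VG named after the storage domain ID
lvs --noheadings -o lv_name,lv_attr,lv_size 55327311-e47c-46b5-b168-258c5924757b | grep bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f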
4 years, 2 months
Low Performance (KVM Vs VMware Hypervisor) When running multi-process application
by Rav Ya
Hello Everyone,
Please advise. Any help will be highly appreciated. Thank you in advance.
Test Setup:
1. oVirt CentOS 7.8 virtualization host
2. Guest VM CentOS 7.8 (multiqueue enabled, 6 vCPUs with 6 Rx/Tx queues)
3. The vCPUs are configured for host passthrough (pinned CPUs).
The guest VM runs the application in userspace. The application consists of
a parent process that reads packets in raw socket mode from the interface
and forwards them to child processes (roughly one per vCPU) via IPC (shared
memory / pipes). *The performance (throughput / CPU utilization) that I get
with KVM is half of what I get with VMware.*
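For what it's worth, this is how I am checking the multiqueue and pinning
setup (a sketch; the interface and VM names are placeholders):
# inside the guest: confirm the virtio-net queue count matches the vCPUs
ethtool -l eth0
# on the host: confirm the vCPU pinning that oVirt applied
virsh vcpupin guest-vm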
Any thoughts on the observations below? Any suggestions?
- KVM guest VMs show degraded performance when running multi-process
applications.
- High futex time (seen on the guest VM when passing traffic).
- High SY: system CPU time spent in kernel space (seen on both the
hypervisor and the guest VMs, only when running my application).
-Rav Ya
4 years, 2 months