oVirt HCI Setup Wizard - Drive Specification Ignored
by Jeremey Wise
Greetings:
3 x servers; each server has 1 x 512GB SSD and 2 x 1TB SSDs (JBOD)
Goal: use the HCI disk setup wizard to deploy the initial structure
Each server's disks scan in as different /dev/sd# devices, so I am trying to
use the clearer /dev/mapper/<disk ID> names instead.
As such I set this per table below:
# Select each server and set each drive <<<<<<<<<< Double check the drive
device IDs, as they do NOT match up per host
# I switched to the /dev/mapper objects to avoid the ambiguity of /dev/sd#
host     device     stable /dev/mapper alias
thor     /dev/sdc   /dev/mapper/Samsung_SSD_850_PRO_512GB_S250NXAGA15787L
odin     /dev/sdb   /dev/mapper/Micron_1100_MTFDDAV512TBN_17401F699137
medusa   /dev/sdb   /dev/mapper/SAMSUNG_SSD_PM851_mSATA_512GB_S1EWNYAF609306
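For reference, a quick way to map the unstable /dev/sd# name to its stable alias on a given host (a sketch; run on each node, the grep pattern is only an example):
# Map /dev/sdX to its stable multipath / by-id name on this host.
lsblk -o NAME,SIZE,MODEL,SERIAL,TYPE
multipath -ll                        # shows the /dev/mapper name and the sdX path(s) behind it
ls -l /dev/disk/by-id/ | grep sdc    # by-id symlinks pointing at the disk in question (thor's sdc here)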
# Note that drives need to be completely clear of any partition or file
system
[root@thor /]# gdisk /dev/sdc
GPT fdisk (gdisk) version 1.0.3
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Command (? for help): x
Expert command (? for help): z
About to wipe out GPT on /dev/sdc. Proceed? (Y/N): y
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Blank out MBR? (Y/N): y
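The same cleanup can be scripted non-interactively; a rough sketch, using thor's device as the example (double-check the device before wiping):
DEV=/dev/mapper/Samsung_SSD_850_PRO_512GB_S250NXAGA15787L
lsblk -f "$DEV"          # confirm nothing on the disk is still needed
wipefs -a "$DEV"         # clear filesystem/RAID/LVM signatures
sgdisk --zap-all "$DEV"  # destroy GPT and protective MBR
partprobe "$DEV"         # re-read the now-empty partition table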
But the deployment fails with this error:
<snip>
TASK [gluster.infra/roles/backend_setup : Filter none-existing devices]
********
task path:
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/get_vg_groupings.yml:38
ok: [thorst.penguinpages.local] => {"ansible_facts":
{"gluster_volumes_by_groupname": {}}, "changed": false}
ok: [odinst.penguinpages.local] => {"ansible_facts":
{"gluster_volumes_by_groupname": {}}, "changed": false}
ok: [medusast.penguinpages.local] => {"ansible_facts":
{"gluster_volumes_by_groupname": {}}, "changed": false}
TASK [gluster.infra/roles/backend_setup : Make sure thick pvs exists in
volume group] ***
task path:
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thick_lv_create.yml:37
TASK [gluster.infra/roles/backend_setup : update LVM fact's]
*******************
task path:
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thick_lv_create.yml:83
skipping: [thorst.penguinpages.local] => {"changed": false, "skip_reason":
"Conditional result was False"}
skipping: [odinst.penguinpages.local] => {"changed": false, "skip_reason":
"Conditional result was False"}
skipping: [medusast.penguinpages.local] => {"changed": false,
"skip_reason": "Conditional result was False"}
TASK [gluster.infra/roles/backend_setup : Create thick logical volume]
*********
task path:
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thick_lv_create.yml:90
failed: [medusast.penguinpages.local] (item={'vgname': '
gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L', 'lvname':
'gluster_lv_engine', 'size': '1000G'}) => {"ansible_index_var": "index",
"ansible_loop_var": "item", "changed": false, "err": " Volume group
\"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L\" not found.\n
Cannot process volume group
gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L\n", "index": 0,
"item": {"lvname": "gluster_lv_engine", "size": "1000G", "vgname":
"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L"}, "msg": "Volume
group gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L does not
exist.", "rc": 5}
changed: [thorst.penguinpages.local] => (item={'vgname': '
gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L', 'lvname':
'gluster_lv_engine', 'size': '1000G'}) => {"ansible_index_var": "index",
"ansible_loop_var": "item", "changed": true, "index": 0, "item": {"lvname":
"gluster_lv_engine", "size": "1000G", "vgname":
"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L"}, "msg": ""}
failed: [odinst.penguinpages.local] (item={'vgname':
'gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L', 'lvname':
'gluster_lv_engine', 'size': '1000G'}) => {"ansible_index_var": "index",
"ansible_loop_var": "item", "changed": false, "err": " Volume group
\"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L\" not found.\n
Cannot process volume group
gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L\n", "index": 0,
"item": {"lvname": "gluster_lv_engine", "size": "1000G", "vgname":
"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L"}, "msg": "Volume
group gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L does not
exist.", "rc": 5}
NO MORE HOSTS LEFT
*************************************************************
NO MORE HOSTS LEFT
*************************************************************
PLAY RECAP
*********************************************************************
medusast.penguinpages.local : ok=23 changed=5 unreachable=0
failed=1 skipped=34 rescued=0 ignored=0
odinst.penguinpages.local : ok=23 changed=5 unreachable=0 failed=1
skipped=34 rescued=0 ignored=0
thorst.penguinpages.local : ok=30 changed=9 unreachable=0 failed=0
skipped=29 rescued=0 ignored=0
Please check /var/log/cockpit/ovirt-dashboard/gluster-deployment.log for
more informations.
############
Why is oVirt ignoring the explicit device I set (and double-checked) for the
deployment?
Attached are the Ansible file the wizard creates and the one I had to edit to
correct what the wizard should have generated.
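A quick sanity check (a sketch, using the host names from the log above): confirm that the device name recorded for each host actually exists there, since the failed VG name suggests thor's Samsung alias was applied to odin and medusa as well.
# Does the expected /dev/mapper alias exist on each node, and which VGs are present?
for h in thorst odinst medusast; do
    echo "== $h =="
    ssh root@"$h".penguinpages.local \
        'ls /dev/mapper/ | grep -iE "samsung|micron"; vgs --noheadings -o vg_name'
done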
--
penguinpages <jeremey.wise(a)gmail.com>
New oVirt Install - Host Engine Deployment Fails
by mblanton@vnet.net
I am attempting a new oVirt install. I have two nodes installed (with oVirt Node 4.4). I have NFS shared storage for the hosted engine.
Both nodes are Dells with quad-core Xeon CPUs and 32GB of RAM. Both have run other hypervisors before (XCP-ng and Proxmox), but I'm very interested in learning oVirt now.
The hosted engine deployment (through cockpit) fails during the "Finish" stage.
I do see the initial files created on the NFS storage.
[ INFO ] TASK [ovirt.hosted_engine_setup : Convert CPU model name]
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute ''\n\nThe error appears to be in '/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/create_target_vm/01_create_target_hosted_engine_vm.yml': line 105, column 16, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# - debug: var=server_cpu_dict\n ^ here\n\nThere appears to be both 'k=v' shorthand syntax and YAML in this task. Only one syntax may be used.\n"}
2020-09-13 17:39:56,507+0000 ERROR ansible failed {
"ansible_host": "localhost",
"ansible_playbook": "/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
"ansible_result": {
"_ansible_no_log": false,
"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute ''
\n\nThe error appears to be in '/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/create_target_vm/01_create_targ
et_hosted_engine_vm.yml': line 105, column 16, but may\nbe elsewhere in the file depending on the exact syntax problem.\
n\nThe offending line appears to be:\n\n# - debug: var=server_cpu_dict\n ^ here\n\nThere appears to be bo
th 'k=v' shorthand syntax and YAML in this task. Only one syntax may be used.\n"
},
"ansible_task": "Convert CPU model name",
"ansible_type": "task",
"status": "FAILED",
"task_duration": 1
}
I can see the hosted engine is created and running locally on the node.
I can even SSH into the HostedEngineLocal instance.
[root@ovirt-node01]# virsh --readonly list
Id Name State
-----------------------------------
1 HostedEngineLocal running
Looking at the "Convert CPU model name" task:
https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/blob/master/ta...
set_fact:
cluster_cpu_model: "{{ server_cpu_dict[cluster_cpu.type] }}"
server_cpu_dict looks good; I can find it in the logs, but cluster_cpu is undefined.
But that is normal, correct? The cluster CPU type is "undefined" until the first host is added to the cluster.
The error makes it seem that server_cpu_dict, and not cluster_cpu.type, is the problem.
I'm not sure this is really the cause, but it is the only undefined variable I can find.
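A rough way to confirm which of the two is actually empty is to grep the setup log (the default hosted-engine setup log directory is assumed here):
# Pull the last recorded values of both variables from the setup log.
LOG=$(ls -t /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-*.log | head -1)
grep -o 'server_cpu_dict.*' "$LOG" | tail -1   # should show a populated dict
grep -o 'cluster_cpu.*' "$LOG" | tail -1       # an empty 'type' here matches the error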
Any advice or recommendation is appreciated
-Thanks in advance
2 years, 6 months
Re: Random hosts disconnects
by Artur Socha
Hi Anton,
I am not sure changing this value would fix the issue. The defaults are
pretty high, for example vdsHeartbeatInSeconds=30 seconds,
vdsTimeout=180 seconds, vdsConnectionTimeout=20 seconds.
Do you still have the relevant logs from the affected hosts?
* /var/log/vdsm/vdsm.log
* /var/log/vdsm/supervdsm.log
Please look for any jsonrpc errors, i.e. write/read errors or (connection)
timeouts. Storage-related warnings/errors might also be relevant.
Plus the system logs if possible:
journalctl -f /usr/share/vdsm/vdsmd
journalctl -f /usr/sbin/libvirtd
To get system logs from a particular time period, combine them with the
-S and -U options, for example:
journalctl -S "2020-01-12 07:00:00" -U "2020-01-12 07:15:00"
I don't have a specific idea of what to look for there beyond warnings/errors
or anything else that seems unusual.
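A small sketch of how those checks might be combined on one affected host (the time window and grep patterns below are only placeholders):
# Scan the vdsm logs and the journal around a disconnect.
SINCE="2020-09-16 08:55:00"; UNTIL="2020-09-16 09:05:00"
grep -iE 'jsonrpc|timeout|heartbeat|ERROR|WARN' /var/log/vdsm/vdsm.log | tail -50
grep -iE 'ERROR|WARN' /var/log/vdsm/supervdsm.log | tail -50
journalctl -S "$SINCE" -U "$UNTIL" -u vdsmd -u libvirtd --no-pager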
Artur
On Thu, Sep 17, 2020 at 8:09 AM Anton Louw via Users <users(a)ovirt.org>
wrote:
>
>
> Hi Everybody,
>
>
>
> Did some digging around, and saw a few things regarding “vdsHeartbeatInSeconds”
>
> I had a look at the properties file located at /etc/ovirt-engine/engine-config/engine-config.properties, and do not see an entry for “vdsHeartbeatInSeconds.type=Integer”.
>
> Seeing as these data centers are geographically split, could the “vdsHeartbeatInSeconds” potentially be the issue? Is it safe to increase this value after I add “vdsHeartbeatInSeconds.type=Integer” into my engine-config.properties file?
>
>
>
> Thanks
>
>
>
> *Anton Louw*
> *Cloud Engineer: Storage and Virtualization* at *Vox*
> ------------------------------
> *T:* 087 805 0000 | *D:* 087 805 1572
> *M:* N/A
> *E:* anton.louw(a)voxtelecom.co.za
> *A:* Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
> www.vox.co.za
>
>
> *From:* Anton Louw via Users <users(a)ovirt.org>
> *Sent:* 16 September 2020 09:01
> *To:* users(a)ovirt.org
> *Subject:* [ovirt-users] Random hosts disconnects
>
>
>
>
>
> Hi All,
>
>
>
> I have a strange issue in my oVirt environment. I currently have a
> standalone manager which is running in VMware. In my oVirt environment, I
> have two Data Centers. The manager is currently sitting on the same subnet
> as DC1. Randomly, hosts in DC2 will say “Not Responding” and then 2 seconds
> later, the hosts will activate again.
>
>
>
> The strange thing is, when the manager was sitting on the same subnet as
> DC2, hosts in DC1 will randomly say “Not Responding”
>
>
>
> I have tried going through the logs, but I cannot see anything out of the
> ordinary regarding why the hosts would drop connection. I have attached the
> engine.log for anybody that would like to do a spot check.
>
>
>
> Thanks
>
>
>
> *Anton Louw*
>
> *Cloud Engineer: Storage and Virtualization* at *Vox*
> ------------------------------
>
> *T:* 087 805 0000 | *D:* 087 805 1572
> *M:* N/A
> *E:* anton.louw(a)voxtelecom.co.za
> *A:* Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
> www.vox.co.za
--
Artur Socha
Senior Software Engineer, RHV
Red Hat
Bad volume specification
by Facundo Garat
Hi all,
I'm having some issues with one VM. The VM won't start and it's showing
problems with the virtual disks, so I started the VM without any disks and
tried hot-plugging the disk, and that fails too.
The servers are connected through FC; all the other VMs are working fine.
Any ideas?
Thanks!!
PS: The engine.log is showing this:
2020-09-15 20:10:37,926-03 INFO
[org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand] (default
task-168) [dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Lock Acquired to object
'EngineLock:{exclusiveLocks='[f5bd2e15-a1ab-4724-883a-988b4dc7985b=DISK]',
sharedLocks='[71db02c2-df29-4552-8a7e-cb8bb429a2ac=VM]'}'
2020-09-15 20:10:38,082-03 INFO
[org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Running command:
HotPlugDiskToVmCommand internal: false. Entities affected : ID:
71db02c2-df29-4552-8a7e-cb8bb429a2ac Type: VMAction group
CONFIGURE_VM_STORAGE with role type USER
2020-09-15 20:10:38,117-03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] START,
HotPlugDiskVDSCommand(HostName = nodo2,
HotPlugDiskVDSParameters:{hostId='1c24c269-76c3-468d-a7ce-d0332beb7aef',
vmId='71db02c2-df29-4552-8a7e-cb8bb429a2ac',
diskId='f5bd2e15-a1ab-4724-883a-988b4dc7985b', addressMap='null'}), log id:
f57ee9e
2020-09-15 20:10:38,125-03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Disk hot-plug: <?xml version="1.0"
encoding="UTF-8"?><hotplug>
<devices>
<disk snapshot="no" type="block" device="disk">
<target dev="vda" bus="virtio"/>
<source
dev="/rhev/data-center/mnt/blockSD/55327311-e47c-46b5-b168-258c5924757b/images/f5bd2e15-a1ab-4724-883a-988b4dc7985b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f">
<seclabel model="dac" type="none" relabel="no"/>
</source>
<driver name="qemu" io="native" type="qcow2" error_policy="stop"
cache="none"/>
<alias name="ua-f5bd2e15-a1ab-4724-883a-988b4dc7985b"/>
<serial>f5bd2e15-a1ab-4724-883a-988b4dc7985b</serial>
</disk>
</devices>
<metadata xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
<ovirt-vm:vm>
<ovirt-vm:device devtype="disk" name="vda">
<ovirt-vm:poolID>00000001-0001-0001-0001-000000000311</ovirt-vm:poolID>
<ovirt-vm:volumeID>bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f</ovirt-vm:volumeID>
<ovirt-vm:imageID>f5bd2e15-a1ab-4724-883a-988b4dc7985b</ovirt-vm:imageID>
<ovirt-vm:domainID>55327311-e47c-46b5-b168-258c5924757b</ovirt-vm:domainID>
</ovirt-vm:device>
</ovirt-vm:vm>
</metadata>
</hotplug>
2020-09-15 20:10:38,289-03 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Failed in 'HotPlugDiskVDS' method
2020-09-15 20:10:38,295-03 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] EVENT_ID:
VDS_BROKER_COMMAND_FAILURE(10,802), VDSM nodo2 command HotPlugDiskVDS
failed: General Exception: ("Bad volume specification {'device': 'disk',
'type': 'disk', 'diskType': 'block', 'specParams': {}, 'alias':
'ua-f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'domainID':
'55327311-e47c-46b5-b168-258c5924757b', 'imageID':
'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'poolID':
'00000001-0001-0001-0001-000000000311', 'volumeID':
'bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f', 'path':
'/rhev/data-center/mnt/blockSD/55327311-e47c-46b5-b168-258c5924757b/images/f5bd2e15-a1ab-4724-883a-988b4dc7985b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f',
'discard': False, 'format': 'cow', 'propagateErrors': 'off', 'cache':
'none', 'iface': 'virtio', 'name': 'vda', 'serial':
'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'index': 0}",)
2020-09-15 20:10:38,295-03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Command
'org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand' return
value 'StatusOnlyReturn [status=Status [code=100, message=General
Exception: ("Bad volume specification {'device': 'disk', 'type': 'disk',
'diskType': 'block', 'specParams': {}, 'alias':
'ua-f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'domainID':
'55327311-e47c-46b5-b168-258c5924757b', 'imageID':
'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'poolID':
'00000001-0001-0001-0001-000000000311', 'volumeID':
'bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f', 'path':
'/rhev/data-center/mnt/blockSD/55327311-e47c-46b5-b168-258c5924757b/images/f5bd2e15-a1ab-4724-883a-988b4dc7985b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f',
'discard': False, 'format': 'cow', 'propagateErrors': 'off', 'cache':
'none', 'iface': 'virtio', 'name': 'vda', 'serial':
'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'index': 0}",)]]'
2020-09-15 20:10:38,295-03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] HostName = nodo2
2020-09-15 20:10:38,295-03 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Command
'HotPlugDiskVDSCommand(HostName = nodo2,
HotPlugDiskVDSParameters:{hostId='1c24c269-76c3-468d-a7ce-d0332beb7aef',
vmId='71db02c2-df29-4552-8a7e-cb8bb429a2ac',
diskId='f5bd2e15-a1ab-4724-883a-988b4dc7985b', addressMap='null'})'
execution failed: VDSGenericException: VDSErrorException: Failed to
HotPlugDiskVDS, error = General Exception: ("Bad volume specification
{'device': 'disk', 'type': 'disk', 'diskType': 'block', 'specParams': {},
'alias': 'ua-f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'domainID':
'55327311-e47c-46b5-b168-258c5924757b', 'imageID':
'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'poolID':
'00000001-0001-0001-0001-000000000311', 'volumeID':
'bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f', 'path':
'/rhev/data-center/mnt/blockSD/55327311-e47c-46b5-b168-258c5924757b/images/f5bd2e15-a1ab-4724-883a-988b4dc7985b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f',
'discard': False, 'format': 'cow', 'propagateErrors': 'off', 'cache':
'none', 'iface': 'virtio', 'name': 'vda', 'serial':
'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'index': 0}",), code = 100
2020-09-15 20:10:38,296-03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] FINISH, HotPlugDiskVDSCommand,
return: , log id: f57ee9e
2020-09-15 20:10:38,296-03 ERROR
[org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Command
'org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand' failed:
EngineException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error =
General Exception: ("Bad volume specification {'device': 'disk', 'type':
'disk', 'diskType': 'block', 'specParams': {}, 'alias':
'ua-f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'domainID':
'55327311-e47c-46b5-b168-258c5924757b', 'imageID':
'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'poolID':
'00000001-0001-0001-0001-000000000311', 'volumeID':
'bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f', 'path':
'/rhev/data-center/mnt/blockSD/55327311-e47c-46b5-b168-258c5924757b/images/f5bd2e15-a1ab-4724-883a-988b4dc7985b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f',
'discard': False, 'format': 'cow', 'propagateErrors': 'off', 'cache':
'none', 'iface': 'virtio', 'name': 'vda', 'serial':
'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'index': 0}",), code = 100 (Failed
with error GeneralException and code 100)
2020-09-15 20:10:38,307-03 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] EVENT_ID:
USER_FAILED_HOTPLUG_DISK(2,001), Failed to plug disk owncloud_Disk0 to VM
owncloud (User: admin@internal-authz).
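A few checks that might narrow this down on nodo2 (a sketch, reusing the domain/image/volume IDs from the log above; on a block storage domain the volume should be a logical volume inside the domain's VG):
# Check the LV behind the disk and the path vdsm is complaining about.
SD=55327311-e47c-46b5-b168-258c5924757b
IMG=f5bd2e15-a1ab-4724-883a-988b4dc7985b
VOL=bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f
lvs --noheadings -o lv_name,attr,lv_size "$SD" | grep "$VOL"   # LV present and active ('a' in attr)?
ls -l "/rhev/data-center/mnt/blockSD/$SD/images/$IMG/"         # link vdsm expects for the hotplug
grep -B2 -A20 "$VOL" /var/log/vdsm/vdsm.log | tail -60         # vdsm-side traceback for the failure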
Low Performance (KVM Vs VMware Hypervisor) When running multi-process application
by Rav Ya
Hello Everyone,
Please advise. Any help will be highly appreciated. Thank you in advance.
Test Setup:
1. oVirt CentOS 7.8 virtualization host
2. Guest VM CentOS 7.8 (multiqueue enabled, 6 vCPUs with 6 Rx/Tx queues)
3. The vCPUs are configured for host passthrough (pinned CPUs).
The guest VM runs the application in userspace. The application consists of
a parent process that reads packets in raw-socket mode from the interface
and forwards them to child processes (~vCPUs) via IPC (shared memory and
pipes). The performance (throughput / CPU utilization) that I get with KVM
is half of what I get with VMware.
Any thoughts on the observations below? Any suggestions?
- KVM guest VMs show degraded performance when running multi-process
applications.
- High FUTEX time (seen on the guest VM when passing traffic).
- High SY: system CPU time spent in kernel space (seen on both the
hypervisor and the guest VMs, only when running my application).
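A couple of quick checks that may help when comparing the two hypervisors (a sketch; the VM name and guest NIC are placeholders):
# On the oVirt host: confirm the vCPU pinning and emulator-thread placement.
virsh vcpupin my_guest_vm
virsh emulatorpin my_guest_vm
# Inside the guest: confirm the virtio-net queues actually in use.
ethtool -l eth0
# Inside the guest while traffic flows: per-thread CPU usage; high system/futex
# time usually points at IPC contention between the parent and child processes.
pidstat -u -t 5 1 | head -40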
-Rav Ya
[ANN] oVirt 4.4.3 First Release Candidate is now available for testing
by Lev Veyde
oVirt 4.4.3 First Release Candidate is now available for testing
The oVirt Project is pleased to announce the availability of oVirt 4.4.3
First Release Candidate for testing, as of September 17th, 2020.
This update is the third in a series of stabilization updates to the 4.4
series.
Important notes before you try it
Please note this is a pre-release build.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
Installation instructions
For installation instructions and additional information please refer to:
https://ovirt.org/documentation/
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.2 or newer
* CentOS Linux (or similar) 8.2 or newer
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.2 or newer
* CentOS Linux (or similar) 8.2 or newer
* oVirt Node 4.4 based on CentOS Linux 8.2 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG will be available soon for CentOS Linux 8
Additional Resources:
* Read more about the oVirt 4.4.3 release highlights:
http://www.ovirt.org/release/4.4.3/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.3/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
<https://www.redhat.com>
lev(a)redhat.com | lveyde(a)redhat.com
UPN suffix and my SAM account do not have the sames domains
by dominique.deschenes@gcgenicom.com
I have a SSO authentication question.
In my Active Directory, my UPN suffix login and my SAM account do not have the same domain.
Example:
UPN: dominique.test(a)abc.com
SAM account: domtest(a)local.lan
On the VM Portal, I am able to connect with dominique.test(a)abc.com but not with domtest(a)local.lan (I get the following message: server_error: Cannot resolve principal 'domtest(a)local.lan').
If I connect to the VM Portal with dominique.test(a)abc.com and click on "RDP Console" to get the link, the login in the RDP file is dominique.test(a)local.lan, so I have to manually change the login to dominique.test(a)abc.com.
Is it possible to use the SAM account domtest(a)local.lan instead of the UPN?
If not, is it possible to adjust the console.rdp configuration to include the domain abc.com instead of local.lan?
While researching I found this, but I don't have access:
How to authenticate with Active directory when UPN suffix and Forest Name are different using AAA LDAP extension - Red Hat C
https://access.redhat.com/solutions/2332561
Thank you
Re: Random hosts disconnects
by Anton Louw
Hi Everybody,
Did some digging around, and saw a few things regarding “vdsHeartbeatInSeconds”
I had a look at the properties file located at /etc/ovirt-engine/engine-config/engine-config.properties, and do not see an entry for “vdsHeartbeatInSeconds.type=Integer”.
Seeing as these data centers are geographically split, could the “vdsHeartbeatInSeconds” potentially be the issue? Is it safe to increase this value after I add “vdsHeartbeatInSeconds.type=Integer” into my engine-config.properties file?
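For reference, this is roughly how such values are inspected and changed with engine-config on the engine machine (a sketch; 60 is only an example value, and as noted above a key not yet known to engine-config needs its .type entry in engine-config.properties):
engine-config -g vdsHeartbeatInSeconds       # show the current value
engine-config -s vdsHeartbeatInSeconds=60    # set a new value
systemctl restart ovirt-engine               # engine-config changes take effect after a restart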
Thanks
Anton Louw
Cloud Engineer: Storage and Virtualization
______________________________________
D: 087 805 1572 | M: N/A
A: Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
anton.louw(a)voxtelecom.co.za
www.vox.co.za
From: Anton Louw via Users <users(a)ovirt.org>
Sent: 16 September 2020 09:01
To: users(a)ovirt.org
Subject: [ovirt-users] Random hosts disconnects
Hi All,
I have a strange issue in my oVirt environment. I currently have a standalone manager which is running in VMware. In my oVirt environment, I have two Data Centers. The manager is currently sitting on the same subnet as DC1. Randomly, hosts in DC2 will say “Not Responding” and then 2 seconds later, the hosts will activate again.
The strange thing is, when the manager was sitting on the same subnet as DC2, hosts in DC1 will randomly say “Not Responding”
I have tried going through the logs, but I cannot see anything out of the ordinary regarding why the hosts would drop connection. I have attached the engine.log for anybody that would like to do a spot check.
Thanks
Anton Louw
Cloud Engineer: Storage and Virtualization at Vox
________________________________
T: 087 805 0000 | D: 087 805 1572
M: N/A
E: anton.louw(a)voxtelecom.co.za<mailto:anton.louw@voxtelecom.co.za>
A: Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
www.vox.co.za<http://www.vox.co.za>
oVirt legacy migration policy is missing
by ovirtand-cnj342@yahoo.com
Hello,
We have many cases of failed migrations, and reducing the load on the affected VMs made migration possible. Using "Suspend workload when needed" did not help either; migrations only worked after stopping services on the VMs, thus reducing the load.
I was therefore trying to tweak the migration downtime setting. The suggested way of doing this is by using the Legacy migration policy.
That being said, I am unable to find the Legacy migration policy either in the VM's Host tab or in the cluster's "Migration policy" tab. I cannot find any mention of it being left out of our current version (4.3.6.1) - moreover, the setting "Use custom migration downtime" (only available when using the Legacy migration policy) does exist in the interface - but I cannot find the Legacy policy itself.
The only policies present are Minimal downtime (default for cluster), Post-copy migration and Suspend workload when needed.
Is there a way to make the Legacy migration policy available if it's missing?
Thanks!
Best regards,
Andy
PS. Setting DefaultMaximumMigrationDowntime=20000 as the default did not have any effect, presumably because the system uses one of the 3 pre-defined policies and ignores the "default".
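For what it's worth, this is roughly how that engine-wide default is usually set and checked (a sketch; it is only honoured when the effective migration policy actually uses the custom downtime):
engine-config -g DefaultMaximumMigrationDowntime
engine-config -s DefaultMaximumMigrationDowntime=20000
systemctl restart ovirt-engine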