Re: Promiscuous Mode
by thomas@hoberg.net
Not sure it will do exactly that, but if you create a new vNIC (network) profile, you can select a 'port mirroring' option; my understanding is that this is what you need. You may also want to deselect the network filtering rules in the same place.
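If you want to script it, the same option is exposed through the REST API; a rough sketch (the engine URL, credentials and network UUID below are placeholders):
# Create a vNIC profile with port mirroring enabled, then attach that profile
# to the monitoring VM's NIC. NETWORK-UUID and the credentials are examples only.
curl -k --user 'admin@internal:PASSWORD' \
  -H 'Content-Type: application/xml' \
  -d '<vnic_profile><name>mirror</name><network id="NETWORK-UUID"/><port_mirroring>true</port_mirroring></vnic_profile>' \
  'https://engine.example.com/ovirt-engine/api/vnicprofiles'
In the web UI this corresponds to Network -> vNIC Profiles -> New/Edit, where the 'Port Mirroring' checkbox and the 'Network Filter' dropdown live.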
4 years, 3 months
can't mount an export domain with 4K block size (VDO) as PosixFS (xfs)
by thomas@hoberg.net
After this (https://devconfcz2020a.sched.com/event/YOtG/ovirt-4k-teaching-an-old-dog-...)
I sure do not expect this (log below):
Actually I am trying to evaluate just how portable oVirt storage is, and in this case I had prepared a USB3 HDD with VDO, which I could literally move between farms to transport VMs.
Logical disks are typically oversized for simplicity within the VMs, with QCOW2 and VDO assumed to compensate for this 'lack of planning', while the actually allocated storage easily fits on the HDD.
Once I got beyond the initial hurdles, I came across this somewhat unexpected issue: VDO storage uses 4k blocks all around, but evidently when you mount an export domain (or I guess any domain) as POSIX, 512-byte blocks are assumed somewhere and 4k blocks are rejected.
I'd say that is a bug in VDSM, right?
Or is there anything in the mount options to fix this?
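For what it's worth, the mismatch the exception complains about can be confirmed from the shell before VDSM gets involved; something like this (assuming the same /dev/mapper/vdo1 device and mount point as in the log below):
blockdev --getss /dev/mapper/vdo1       # logical block size, 4096 on a 4K-native VDO volume
blockdev --getpbsz /dev/mapper/vdo1     # physical block size
xfs_info /rhev/data-center/mnt/_dev_mapper_vdo1 | grep sectsz   # sector size xfs was created with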
2020-08-31 18:44:40,424+0200 INFO (periodic/0) [vdsm.api] START repoStats(domains=()) from=internal, task_id=7a293dec-85b3-4b82-92b7-4e7d03b40343 (api:48)
2020-08-31 18:44:40,425+0200 INFO (periodic/0) [vdsm.api] FINISH repoStats return={'9992dc21-edf2-4951-9020-7c78f1220e02': {'code': 0, 'lastCheck': '1.6', 'delay': '0.00283651', 'valid': True, 'version': 5, 'acquired': True, 'actual': True}, '25d95783-44df-4fda-b642-55fe09162149': {'code': 0, 'lastCheck': '2.2', 'delay': '0.00256944', 'valid': True, 'version': 5, 'acquired': True, 'actual': True}, '148d9e9e-d7b8-4220-9ee8-057da96e608c': {'code': 0, 'lastCheck': '2.2', 'delay': '0.00050217', 'valid': True, 'version': 5, 'acquired': True, 'actual': True}} from=internal, task_id=7a293dec-85b3-4b82-92b7-4e7d03b40343 (api:54)
2020-08-31 18:44:41,121+0200 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 seconds (__init__:312)
2020-08-31 18:44:41,412+0200 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 seconds (__init__:312)
2020-08-31 18:44:41,414+0200 INFO (jsonrpc/2) [vdsm.api] START repoStats(domains=['9992dc21-edf2-4951-9020-7c78f1220e02']) from=::1,44782, task_id=657c820e-e2eb-4b1d-b2fb-1772b3af6f32 (api:48)
2020-08-31 18:44:41,414+0200 INFO (jsonrpc/2) [vdsm.api] FINISH repoStats return={'9992dc21-edf2-4951-9020-7c78f1220e02': {'code': 0, 'lastCheck': '2.6', 'delay': '0.00283651', 'valid': True, 'version': 5, 'acquired': True, 'actual': True}} from=::1,44782, task_id=657c820e-e2eb-4b1d-b2fb-1772b3af6f32 (api:54)
2020-08-31 18:44:41,414+0200 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:312)
2020-08-31 18:44:41,965+0200 INFO (jsonrpc/3) [IOProcessClient] (Global) Starting client (__init__:308)
2020-08-31 18:44:41,984+0200 INFO (ioprocess/87332) [IOProcess] (Global) Starting ioprocess (__init__:434)
2020-08-31 18:44:42,006+0200 INFO (jsonrpc/3) [storage.StorageDomainCache] Removing domain fe9fb0db-2743-457a-80f0-9a4edc509e9d from storage domain cache (sdc:211)
2020-08-31 18:44:42,006+0200 INFO (jsonrpc/3) [storage.StorageDomainCache] Invalidating storage domain cache (sdc:74)
2020-08-31 18:44:42,006+0200 INFO (jsonrpc/3) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'id': '00000000-0000-0000-0000-000000000000', 'status': 0}]} from=::ffff:192.168.0.87,40378, flow_id=05fe72ef-c8ac-4e03-8453-5171e5fc5f8b, task_id=b92d4144-a9db-4423-ba6b-35934d4f9200 (api:54)
2020-08-31 18:44:42,008+0200 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call StoragePool.connectStorageServer succeeded in 3.11 seconds (__init__:312)
2020-08-31 18:44:42,063+0200 INFO (jsonrpc/5) [vdsm.api] START getStorageDomainsList(spUUID='00000000-0000-0000-0000-000000000000', domainClass=3, storageType='', remotePath='/dev/mapper/vdo1', options=None) from=::ffff:192.168.0.87,40378, flow_id=eff548f9-b663-45dd-b8a5-5854f9b5dde8, task_id=98ca66a5-edee-40d4-9253-6a46409241cc (api:48)
2020-08-31 18:44:42,063+0200 INFO (jsonrpc/5) [storage.StorageDomainCache] Refreshing storage domain cache (resize=True) (sdc:80)
2020-08-31 18:44:42,063+0200 INFO (jsonrpc/5) [storage.ISCSI] Scanning iSCSI devices (iscsi:442)
2020-08-31 18:44:42,101+0200 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 seconds (__init__:312)
2020-08-31 18:44:42,128+0200 INFO (jsonrpc/5) [storage.ISCSI] Scanning iSCSI devices: 0.06 seconds (utils:390)
2020-08-31 18:44:42,129+0200 INFO (jsonrpc/5) [storage.HBA] Scanning FC devices (hba:60)
2020-08-31 18:44:42,182+0200 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 seconds (__init__:312)
2020-08-31 18:44:42,184+0200 INFO (jsonrpc/4) [vdsm.api] START repoStats(domains=['9992dc21-edf2-4951-9020-7c78f1220e02']) from=::1,44782, task_id=7b3f1eb8-f7e3-4467-93a1-27ef9721c90c (api:48)
2020-08-31 18:44:42,184+0200 INFO (jsonrpc/4) [vdsm.api] FINISH repoStats return={'9992dc21-edf2-4951-9020-7c78f1220e02': {'code': 0, 'lastCheck': '3.4', 'delay': '0.00283651', 'valid': True, 'version': 5, 'acquired': True, 'actual': True}} from=::1,44782, task_id=7b3f1eb8-f7e3-4467-93a1-27ef9721c90c (api:54)
2020-08-31 18:44:42,184+0200 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:312)
2020-08-31 18:44:42,277+0200 INFO (jsonrpc/5) [storage.HBA] Scanning FC devices: 0.15 seconds (utils:390)
2020-08-31 18:44:42,309+0200 INFO (jsonrpc/5) [storage.Multipath] Resizing multipath devices (multipath:104)
2020-08-31 18:44:42,312+0200 INFO (jsonrpc/5) [storage.Multipath] Resizing multipath devices: 0.00 seconds (utils:390)
2020-08-31 18:44:42,312+0200 INFO (jsonrpc/5) [storage.StorageDomainCache] Refreshing storage domain cache: 0.25 seconds (utils:390)
2020-08-31 18:44:42,418+0200 INFO (tmap-569/0) [IOProcessClient] (_dev_mapper_vdo1) Starting client (__init__:308)
2020-08-31 18:44:42,445+0200 INFO (tmap-569/1) [IOProcessClient] (glusterSD/nucvirt.mtk.hoberg.net:_data) Starting client (__init__:308)
2020-08-31 18:44:42,449+0200 INFO (ioprocess/87496) [IOProcess] (_dev_mapper_vdo1) Starting ioprocess (__init__:434)
2020-08-31 18:44:42,457+0200 INFO (tmap-569/2) [IOProcessClient] (glusterSD/nucvirt.mtk.hoberg.net:_vmstore) Starting client (__init__:308)
2020-08-31 18:44:42,465+0200 INFO (ioprocess/87508) [IOProcess] (glusterSD/nucvirt.mtk.hoberg.net:_data) Starting ioprocess (__init__:434)
2020-08-31 18:44:42,478+0200 INFO (tmap-569/3) [IOProcessClient] (glusterSD/nucvirt.mtk.hoberg.net:_engine) Starting client (__init__:308)
2020-08-31 18:44:42,486+0200 INFO (ioprocess/87516) [IOProcess] (glusterSD/nucvirt.mtk.hoberg.net:_vmstore) Starting ioprocess (__init__:434)
2020-08-31 18:44:42,514+0200 INFO (ioprocess/87533) [IOProcess] (glusterSD/nucvirt.mtk.hoberg.net:_engine) Starting ioprocess (__init__:434)
2020-08-31 18:44:42,522+0200 INFO (jsonrpc/5) [IOProcessClient] (fe9fb0db-2743-457a-80f0-9a4edc509e9d) Starting client (__init__:308)
2020-08-31 18:44:42,553+0200 INFO (ioprocess/87552) [IOProcess] (fe9fb0db-2743-457a-80f0-9a4edc509e9d) Starting ioprocess (__init__:434)
2020-08-31 18:44:42,582+0200 ERROR (jsonrpc/5) [storage.HSM] Unexpected error (hsm:2843)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/storage/hsm.py", line 2829, in getStorageDomainsList
    dom = sdCache.produce(sdUUID=sdUUID)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sdc.py", line 115, in produce
    domain.getRealDomain()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sdc.py", line 51, in getRealDomain
    return self._cache._realProduce(self._sdUUID)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sdc.py", line 139, in _realProduce
    domain = self._findDomain(sdUUID)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sdc.py", line 156, in _findDomain
    return findMethod(sdUUID)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/nfsSD.py", line 146, in findDomain
    return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID))
  File "/usr/lib/python3.6/site-packages/vdsm/storage/fileSD.py", line 390, in __init__
    manifest.block_size, storage_block_size)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sd.py", line 939, in _validate_storage_block_size
    raise se.StorageDomainBlockSizeMismatch(block_size, storage_block_size)
vdsm.storage.exception.StorageDomainBlockSizeMismatch: Block size does not match storage block size: 'block_size=512, storage_block_size=4096'
2020-08-31 18:44:42,582+0200 INFO (jsonrpc/5) [vdsm.api] FINISH getStorageDomainsList return={'domlist': []} from=::ffff:192.168.0.87,40378, flow_id=eff548f9-b663-45dd-b8a5-5854f9b5dde8, task_id=98ca66a5-edee-40d4-9253-6a46409241cc (api:54)
2020-08-31 18:44:42,582+0200 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call Host.getStorageDomains succeeded in 0.52 seconds (__init__:312)
2020-08-31 18:44:42,801+0200 INFO (jsonrpc/7) [vdsm.api] START disconnectStorageServer(domType=6, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'password': '********', 'vfs_type': 'xfs', 'port': '', 'mnt_options': 'rw,relatime,seclabel,attr2,inode64,noquota,_netdev,x-systemd.device-timeout=0', 'iqn': '', 'connection': '/dev/mapper/vdo1', 'ipv6_enabled': 'false', 'id': 'b4a73d6a-344c-446b-a33d-7b665e78578d', 'user': '', 'tpgt': '1'}], options=None) from=::ffff:192.168.0.87,40378, flow_id=6678917a-b29c-416c-aa2d-5abc1d5351a3, task_id=3d0cf36e-427d-4000-8857-a8a2af641b10 (api:48)
2020-08-31 18:44:42,801+0200 INFO (jsonrpc/7) [storage.Mount] unmounting /rhev/data-center/mnt/_dev_mapper_vdo1 (mount:215)
2020-08-31 18:44:42,806+0200 INFO (jsonrpc/0) [vdsm.api] START disconnectStorageServer(domType=6, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'password': '********', 'vfs_type': 'xfs', 'port': '', 'mnt_options': 'rw,relatime,seclabel,attr2,inode64,noquota,_netdev,x-systemd.device-timeout=0', 'iqn': '', 'connection': '/dev/mapper/vdo1', 'ipv6_enabled': 'false', 'id': 'b4a73d6a-344c-446b-a33d-7b665e78578d', 'user': '', 'tpgt': '1'}], options=None) from=::ffff:192.168.0.87,40378, flow_id=bb449405-5b78-4b65-a352-e2e94a269df9, task_id=09783e0a-933c-448d-b0ca-1b5d351eb042 (api:48)
2020-08-31 18:44:42,806+0200 INFO (jsonrpc/0) [storage.Mount] unmounting /rhev/data-center/mnt/_dev_mapper_vdo1 (mount:215)
2020-08-31 18:44:42,821+0200 ERROR (jsonrpc/0) [storage.HSM] Could not disconnect from storageServer (hsm:2528)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/storage/hsm.py", line 2524, in disconnectStorageServer
    conObj.disconnect()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 201, in disconnect
    self._mount.umount(True, True)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/mount.py", line 217, in umount
    umount(self.fs_file, force=force, lazy=lazy, freeloop=freeloop)
  File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 56, in __call__
    return callMethod()
  File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 54, in <lambda>
    **kwargs)
  File "<string>", line 2, in umount
  File "/usr/lib64/python3.6/multiprocessing/managers.py", line 772, in _callmethod
    raise convert_to_error(kind, result)
vdsm.storage.mount.MountError: Command ['/usr/bin/umount', '-f', '-l', '/rhev/data-center/mnt/_dev_mapper_vdo1'] failed with rc=32 out=b'' err=b'umount: /rhev/data-center/mnt/_dev_mapper_vdo1: not mounted.\n'
2020-08-31 18:44:42,821+0200 INFO (jsonrpc/0) [storage.StorageDomainCache] Refreshing storage domain cache (resize=False) (sdc:80)
4 years, 3 months
Promiscuous Mode
by Anton Louw
Hi All,
I would like to deploy a VM that will monitor a specific network. I know in VMware you can set the port group to allow Promiscuous Mode. Is there something similar I can do in oVirt?
Thanks
Anton Louw
Cloud Engineer: Storage and Virtualization
______________________________________
D: 087 805 1572 | M: N/A
A: Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
anton.louw@voxtelecom.co.za
www.vox.co.za
4 years, 3 months
Error exporting into ova
by Gianluca Cecchi
Hello,
I'm playing with export_vm_as_ova.py downloaded from the examples github:
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/export...
My environment is oVirt 4.3.3.7 with iSCSI storage domain.
It fails leaving an ova.tmp file
In webadmin gui:
Starting to export Vm enginecopy1 as a Virtual Appliance
7/19/19 11:55:12 AM
VDSM ov301 command TeardownImageVDS failed: Cannot deactivate Logical
Volume: ('General Storage Exception: ("5 [] [\' Logical volume
fa33df49-b09d-4f86-9719-ede649542c21/0420ef47-0ad0-4cf9-babd-d89383f7536b
in
use.\']\\nfa33df49-b09d-4f86-9719-ede649542c21/[\'a7480dc5-b5ca-4cb3-986d-77bc12165be4\',
\'0420ef47-0ad0-4cf9-babd-d89383f7536b\']",)',)
7/19/19 12:25:36 PM
Failed to export Vm enginecopy1 as a Virtual Appliance to path
/save_ova/base/dump/myvm2.ova on Host ov301
7/19/19 12:25:37 PM
During export I have this qemu-img process creating the disk over the loop
device:
root 30878 30871 0 11:55 pts/2 00:00:00 su -p -c qemu-img convert
-T none -O qcow2
'/rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/59a4a324-4c99-4ff5-abb1-e9bbac83292a/0420ef47-0ad0-4cf9-babd-d89383f7536b'
'/dev/loop1' vdsm
vdsm 30882 30878 10 11:55 ? 00:00:00 qemu-img convert -T none -O
qcow2
/rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/59a4a324-4c99-4ff5-abb1-e9bbac83292a/0420ef47-0ad0-4cf9-babd-d89383f7536b
/dev/loop1
The ova.tmp file is getting filled while command runs
eg:
[root@ov301 ]# du -sh /save_ova/base/dump/myvm2.ova.tmp
416M /save_ova/base/dump/myvm2.ova.tmp
[root@ov301 sysctl.d]#
[root@ov301 sysctl.d]# du -sh /save_ova/base/dump/myvm2.ova.tmp
911M /save_ova/base/dump/myvm2.ova.tmp
[root@ov301 ]#
and the final generated / not completed file is in this state:
[root@ov301 ]# qemu-img info /save_ova/base/dump/myvm2.ova.tmp
image: /save_ova/base/dump/myvm2.ova.tmp
file format: raw
virtual size: 30G (32217446400 bytes)
disk size: 30G
[root@ov301 sysctl.d]#
But I notice that the timestamp of the file is about 67 minutes after the start
of the job and well after the notice of its failure....
[root@ov301 sysctl.d]# ll /save_ova/base/dump/
total 30963632
-rw-------. 1 root root 32217446400 Jul 19 13:02 myvm2.ova.tmp
[root@ov301 sysctl.d]#
[root@ov301 sysctl.d]# du -sh /save_ova/base/dump/myvm2.ova.tmp
30G /save_ova/base/dump/myvm2.ova.tmp
[root@ov301 sysctl.d]#
In engine.log the first error I see is 30 minutes after start
2019-07-19 12:25:31,563+02 ERROR
[org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor]
(EE-ManagedThreadFactory-engineScheduled-Thread-64) [2001ddf4] Ansible
playbook execution failed: Timeout occurred while executing Ansible
playbook.
2019-07-19 12:25:31,563+02 INFO
[org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor]
(EE-ManagedThreadFactory-engineScheduled-Thread-64) [2001ddf4] Ansible
playbook command has exited with value: 1
2019-07-19 12:25:31,564+02 ERROR
[org.ovirt.engine.core.bll.CreateOvaCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-64) [2001ddf4] Failed to
create OVA. Please check logs for more details:
/var/log/ovirt-engine/ova/ovirt-export-ova-ansible-20190719115531-ov301-2001ddf4.log
2019-07-19 12:25:31,565+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-64) [2001ddf4] START,
TeardownImageVDSCommand(HostName = ov301,
ImageActionsVDSCommandParameters:{hostId='8ef1ce6f-4e38-486c-b3a4-58235f1f1d06'}),
log id: 3d2246f7
2019-07-19 12:25:36,569+02 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engineScheduled-Thread-64) [2001ddf4] EVENT_ID:
VDS_BROKER_COMMAND_FAILURE(10,802), VDSM ov301 command TeardownImageVDS
failed: Cannot deactivate Logical Volume: ('General Storage Exception: ("5
[] [\' Logical volume
fa33df49-b09d-4f86-9719-ede649542c21/0420ef47-0ad0-4cf9-babd-d89383f7536b
in
use.\']\\nfa33df49-b09d-4f86-9719-ede649542c21/[\'a7480dc5-b5ca-4cb3-986d-77bc12165be4\',
\'0420ef47-0ad0-4cf9-babd-d89383f7536b\']",)',)
In the log file suggested by the Ansible playbook error I don't see anything useful.
It ends with timestamps from when the script was launched.
The last lines are:
2019-07-19 11:55:33,877 p=5699 u=ovirt | TASK [ovirt-ova-export-pre-pack :
Retrieving the temporary path for the OVA file] ***
2019-07-19 11:55:34,198 p=5699 u=ovirt | changed: [ov301] => {
"changed": true,
"dest": "/save_ova/base/dump/myvm2.ova.tmp",
"gid": 0,
"group": "root",
"mode": "0600",
"owner": "root",
"secontext": "system_u:object_r:nfs_t:s0",
"size": 32217446912,
"state": "file",
"uid": 0
}
2019-07-19 11:55:34,204 p=5699 u=ovirt | TASK [ovirt-ova-pack : Run
packing script] *************************************
It seems to be a 30-minute timeout? On what exactly, the Ansible job?
Or possibly the implicit user session created when running the python script?
The snapshot has been correctly deleted (as I also see in engine.log); I
don't see it in the webadmin GUI.
Any known problem?
Just as a test I executed it again at 14:24 and I see the same Ansible error at
14:54
The snapshot gets deleted, while the qemu-img command still continues....
[root@ov301 sysctl.d]# ps -ef | grep qemu-img
root 13504 13501 0 14:24 pts/1 00:00:00 su -p -c qemu-img convert
-T none -O qcow2
'/rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/59a4a324-4c99-4ff5-abb1-e9bbac83292a/0420ef47-0ad0-4cf9-babd-d89383f7536b'
'/dev/loop0' vdsm
vdsm 13505 13504 3 14:24 ? 00:01:26 qemu-img convert -T none -O
qcow2
/rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/59a4a324-4c99-4ff5-abb1-e9bbac83292a/0420ef47-0ad0-4cf9-babd-d89383f7536b
/dev/loop0
root 17587 24530 0 15:05 pts/0 00:00:00 grep --color=auto qemu-img
[root@ov301 sysctl.d]#
[root@ov301 sysctl.d]# du -sh /save_ova/base/dump/myvm2.ova.tmp
24G /save_ova/base/dump/myvm2.ova.tmp
[root@ov301 sysctl.d]# ll /save_ova/base/dump/myvm2.ova.tmp
-rw-------. 1 root root 32217446400 Jul 19 15:14
/save_ova/base/dump/myvm2.ova.tmp
[root@ov301 sysctl.d]#
and it then continues until the image copy completes, but by that time the job has
already been aborted, so the composition of the OVA doesn't go
ahead... and I am left with the ova.tmp file...
How can I extend the timeout?
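For reference, it looks like the engine-side limit might be tunable with engine-config; I have not verified that the key below exists on 4.3, so this is just a guess to be checked first:
engine-config -l | grep -i ansible      # confirm the key name and its unit before setting anything
engine-config -s AnsiblePlaybookExecutionTimeout=120
systemctl restart ovirt-engine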
Thanks in advance,
Gianluca
4 years, 3 months
Hosted Engine stuck in Firmware
by Vinícius Ferrão
Hello, I have a strange issue with oVirt 4.4.1.
The hosted engine is stuck in the UEFI firmware and never actually boots.
I think this happened when I changed the default VM mode for the cluster inside the datacenter.
Is there a way to fix this without redeploying the engine?
4 years, 3 months
deprecating export domain?
by Charles Kozler
Hello,
I recently read on this list, from a Red Hat member, that the export domain is
either being deprecated or being considered for deprecation.
To that end, can you share details? Can you share any notes/postings/BZs
that document this? I would imagine something like this would be discussed
with a larger audience.
This seems like a fairly significant change to make, and I am curious
where in the roadmap it is scheduled. Currently a lot of my backups rely explicitly on
an export domain for online snapshots, so I'd like to plan accordingly.
Thanks!
4 years, 3 months
shutdown virtual machine did not display in the list when using virsh list --all and virsh list --state-shutoff
by Kyre
Hello, I am using oVirt 4.3.8. I want to use the virsh command to boot a shut-down virtual machine, but when using virsh list --all and virsh list --state-shutoff, the shut-down virtual machine does not appear in the list. What is the cause of this, and how can I get the shut-down virtual machine to show up?
virsh # list --all
Please enter your authentication name: root
Please enter your password:
Id Name State
----------------------------------------------------
4 win10E2 running
virsh # list --state-shutoff
Id Name State
----------------------------------------------------
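My understanding is that VDSM starts guests as transient libvirt domains, so a powered-off VM is simply not defined in libvirt at all and virsh cannot list or start it; if that is right, the start has to go through the engine instead, for example via the REST API (engine hostname and credentials below are placeholders):
# Find the VM id by name, then ask the engine to start it:
curl -k --user 'admin@internal:PASSWORD' \
  'https://engine.example.com/ovirt-engine/api/vms?search=name%3Dwin10E2'
curl -k --user 'admin@internal:PASSWORD' \
  -H 'Content-Type: application/xml' -d '<action/>' \
  'https://engine.example.com/ovirt-engine/api/vms/VM-UUID/start'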
4 years, 3 months
Ovirt Node 4.4.2 Engine Deployment - Notification Settings
by David White
I finally got oVirt node installed with gluster on a single node.
So that's great progress!
Once that step was complete...
I noticed that the Engine Deployment wizard asks for SMTP settings for where to send notifications. I was kind of surprised that it doesn't allow one to enter any credentials. It looks like this requires an unauthenticated local relay. I don't like that. :) See attached screenshot.
Has there been any talk about adding this into the wizard deployment?
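In the meantime, a common workaround is to leave the wizard pointing at localhost and have a local Postfix instance relay outbound mail with credentials; a rough sketch, run on the machine that sends the notifications (the relay host and account are placeholders):
postconf -e 'relayhost = [smtp.example.com]:587'
postconf -e 'smtp_sasl_auth_enable = yes'
postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'
postconf -e 'smtp_sasl_security_options = noanonymous'
postconf -e 'smtp_tls_security_level = encrypt'
echo '[smtp.example.com]:587 user@example.com:APP-PASSWORD' > /etc/postfix/sasl_passwd
postmap /etc/postfix/sasl_passwd
chmod 600 /etc/postfix/sasl_passwd*
systemctl restart postfix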
Sent with ProtonMail Secure Email.
4 years, 3 months
i/o wait and slow system
by info@dsdm.ch
Hello,
I'm running a home setup with 3 nodes and 2 SATA SSDs per node.
As storage I'm running glusterfs over 40GBit/s links.
Software Version:4.3.9.4-1.el7
I've a lot of I/O Wait on the nodes (20%) and on the VMs (50%).
gluster volume top vmstore write-perf bs 2014 count 1024 | grep Through
Throughput 635.54 MBps time 0.0032 secs
Throughput 614.89 MBps time 0.0034 secs
Throughput 622.31 MBps time 0.0033 secs
Throughput 643.07 MBps time 0.0032 secs
Throughput 621.75 MBps time 0.0033 secs
Throughput 609.26 MBps time 0.0034 secs
gluster volume top vmstore read-perf bs 2014 count 1024 | grep Through
Throughput 1274.62 MBps time 0.0016 secs
Throughput 1320.32 MBps time 0.0016 secs
Throughput 1203.93 MBps time 0.0017 secs
Throughput 1293.81 MBps time 0.0016 secs
Throughput 1213.14 MBps time 0.0017 secs
Throughput 1193.48 MBps time 0.0017 secs
Volume Name: vmstore
Type: Distributed-Replicate
Volume ID: 195e2a05-9667-4b8b-b0b7-82294631de50
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.9.9.101:/gluster_bricks/vmstore/vmstore
Brick2: 10.9.9.102:/gluster_bricks/vmstore/vmstore
Brick3: 10.9.9.103:/gluster_bricks/vmstore/vmstore
Brick4: 10.9.9.101:/gluster_bricks/S4CYNF0M219849L/S4CYNF0M219849L
Brick5: 10.9.9.102:/gluster_bricks/S4CYNF0M219836L/S4CYNF0M219836L
Brick6: 10.9.9.103:/gluster_bricks/S4CYNF0M219801Y/S4CYNF0M219801Y
Options Reconfigured:
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
performance.strict-o-direct: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
cluster.choose-local: off
client.event-threads: 4
server.event-threads: 4
network.ping-timeout: 30
storage.owner-uid: 36
storage.owner-gid: 36
cluster.granular-entry-heal: enable
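To narrow down where the wait comes from I have been looking at per-device stats and gluster's own profiler (iostat is in the sysstat package):
iostat -dxm 5                            # %util and await per SSD on each node
gluster volume profile vmstore start
gluster volume profile vmstore info      # per-brick latency distribution
gluster volume profile vmstore stop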
Please help me to analyse the root cause.
Many thanks
Metz
4 years, 3 months
Missing model_FLAGS on specific host
by Vinícius Ferrão
Hi,
I have a strange issue on one of my hosts: it's missing a lot of the CPU flags that oVirt seems to require:
[root@c4140 ~]# vdsm-client Host getCapabilities | egrep "cpuFlags|cpuModel"
"cpuFlags": "ssse3,mca,ept,pdpe1gb,vmx,clwb,smep,msr,acpi,pge,sse4_2,nopl,cqm_mbm_total,cx16,avx512vl,aperfmperf,xsaves,3dnowprefetch,nonstop_tsc,cmov,mce,intel_pt,avx512f,fpu,pku,tsc,sdbg,erms,pse36,md_clear,apic,sse,pcid,clflushopt,xtopology,pts,monitor,vpid,cpuid,hle,mba,ss,cqm,avx2,ibpb,xgetbv1,flush_l1d,mmx,epb,pti,fxsr,dca,nx,syscall,stibp,mtrr,cx8,sse2,avx,sep,intel_ppin,lm,tm,bts,adx,bmi1,smx,popcnt,pclmulqdq,lahf_lm,mpx,rdseed,cqm_llc,avx512cd,cdp_l3,f16c,invpcid,fsgsbase,cpuid_fault,tm2,smap,dts,pse,xsave,sse4_1,constant_tsc,pat,tsc_deadline_timer,vnmi,avx512dq,dtes64,xsaveopt,ida,pdcm,tpr_shadow,pln,de,x2apic,avx512bw,pae,rdrand,clflush,rdtscp,art,cqm_mbm_local,pebs,ssbd,movbe,pbe,tsc_adjust,vme,ht,est,bmi2,cat_l3,dtherm,ospke,rdt_a,aes,ibrs,rep_good,fma,xtpr,ds_cpl,abm,xsavec,invpcid_single,flexpriority,cqm_occup_llc,pni,rtm,arat,arch_perfmon",
"cpuModel": "Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz",
A properly working host has flags like these:
model_Westmere-IBRS,model_kvm32,model_core2duo,model_Opteron_G1,model_Broadwell,model_qemu64,model_Broadwell-noTSX,model_Nehalem-IBRS,model_Haswell-IBRS,model_pentium2,model_Broadwell-IBRS,model_Haswell-noTSX,model_Haswell,model_Haswell-noTSX-IBRS,model_Conroe,model_pentium,model_n270,model_Nehalem,model_IvyBridge-IBRS,model_kvm64,model_SandyBridge,model_pentium3,model_Broadwell-noTSX-IBRS,model_qemu32,model_486,model_IvyBridge,model_SandyBridge-IBRS,model_Westmere,model_Penryn,model_Opteron_G2,model_coreduo",
But on this machine they're totally missing. I know these model_ flags are an oVirt thing, since they aren't standard CPU flags.
The host machine is a Dell C4140 compute node and its firmware is fully updated, so I've covered the basics in trying to figure out what's happening.
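As far as I understand, VDSM derives the model_* entries from the CPU models libvirt reports as usable, so I also plan to compare against libvirt directly on the problem host:
virsh -r domcapabilities | grep -A 40 "<mode name='custom'"   # models libvirt marks usable='yes'
vdsm-client Host getCapabilities | grep -o 'model_[A-Za-z0-9_-]*'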
Thanks,
4 years, 3 months