VM paused then killed with "device vda reported I/O error"
by fsoyer
Hi,
Possibly related (or maybe not) to my other thread, "VMs unexpectedly restarted": I have one VM (only one) that was paused and then killed this morning (1:01 AM).
This is the second time it has happened (the first was about 15 days ago), and only to this VM. It sits on a domain with 5 other VMs and is not the most heavily used of them, and it happened at night, with no particular workload running at that time. The other VMs on the same storage were not impacted at all. And it is not on the same storage domain as the VM from "VMs unexpectedly restarted"...
At the same time, Gluster seems to have seen absolutely nothing. Is there really a storage issue?
Here are some relevant logs:
/var/log/messages of the node
vdsm.log
engine.log
glusterd.log
data01-brick.log
As a reminder, this is a 3-node 4.2.3 cluster on Gluster 3.12.13 (replica 2 + arbiter).
Any idea where or what I should search for?
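So far I have only grepped the obvious places around the 01:01 timestamp, roughly like this (SERVER and VOLNAME below are placeholders for my real Gluster server and volume names):
# on the node that was running the VM
grep -iE 'vda|eio|abnormal vm stop' /var/log/vdsm/vdsm.log
# Gluster client (mount) log for the storage domain
grep -E ' (E|W) \[' /var/log/glusterfs/rhev-data-center-mnt-glusterSD-SERVER:VOLNAME.log
# pending or failed heals on the volume
gluster volume heal VOLNAME info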
Thanks
--
Regards,
Frank
6 years
SR-IOV / No free VF (pci=assign-busses issue ?)
by jeanbaptiste@nfrance.com
Hello, I've been facing an issue with SR-IOV for several days.
On a specific configuration, I'm unable to start a VM with a VF.
Tyan S5510 + Intel E3-1220 (VT-x and VT-d enabled).
I've enabled 63 VFs (Intel X520-DA2). The engine interface shows me:
Enabled virtual function: 63
Free Virtual function: 63
I've created a network + vNIC profile (passthrough enabled).
I have the same issue whether or not I force the mapping ("Allowed Networks / All networks / Specific networks") in the host's Setup Networks.
When I try to start the VM, the engine error message is:
Failed to run VM test due to a failed validation: [Cannot run VM. There is no host that satisfies current scheduling constraints. See below for details:, The host sriov-hp1.admin-lan.nfrance.com did not satisfy internal filter Network because there are no free virtual functions which are suitable for virtual nic(s) nic1. A virtual function is considered as suitable if the VF's configuration of its physical function contains the virtual nic's network/network label.] (User: admin@internal-authz).
The problem does not seem to be related to the host: the VFs are visible in the output of "ip l" on the host.
Unfortunately, I have no representative error in the host-side logs or in the engine log (except the error above in the engine log).
For information, the kernel parameters on the host are: intel_iommu=on pci=assign-busses. pci=assign-busses is mandatory on this motherboard, otherwise the VFs cannot be created.
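For reference, this is roughly how I check the VF state on the host side (the interface name enp3s0f0 below is just an example, not my real one):
# VFs currently enabled vs. supported on the physical function
cat /sys/class/net/enp3s0f0/device/sriov_numvfs
cat /sys/class/net/enp3s0f0/device/sriov_totalvfs
# per-VF MAC/VLAN state as seen from the PF
ip link show enp3s0f0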
Do you have any ideas?
6 years
Engine restore failing
by Maton, Brett
oVirt: 4.2.6.2-1
I'm moving the hosted engine from one storage domain to another by backing up
and restoring the engine.
A new VM is provisioned in the new storage domain, and I get as far as trying to
restore the backup, but I am getting this DbJustRestored error:
engine-backup --mode=restore --file=engine.backup --log=engine-restore.log --provision-db --provision-dwh-db --restore-permissions
Preparing to restore:
- Unpacking file 'engine.backup'
Restoring:
- Files
Provisioning PostgreSQL users/databases:
- user 'engine', database 'engine'
- user 'ovirt_engine_history', database 'ovirt_engine_history'
Restoring:
- Engine database 'engine'
- Cleaning up temporary tables in engine database 'engine'
FATAL: Failed checking DbJustRestored
Restore log attached, any suggestions?
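(The only check I could think of trying myself, assuming the flag from the error message is kept in the engine's vdc_options table, which is a guess on my part, would be something like:
su - postgres -c "psql engine -c \"SELECT option_name, option_value FROM vdc_options WHERE option_name = 'DbJustRestored';\""
but I don't know whether that is where engine-backup actually looks.)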
6 years
Re: Storage domain mount error: Lustre file system(Posix compliant FS)
by 김형근
Um...
Does the Gluster included in version 4.2.7 improve small-file I/O performance?
If that's the case, I will test it as you say.
Finally, I want to ask one more time:
How can I use Lustre?
From: Yaniv Kaul <ykaul(a)redhat.com>
To: '김형근' <okok102928(a)fusiondata.co.kr>
Cc: Ovirt Users <users(a)ovirt.org>
Date: 2018-10-29 16:57:57
Subject: Re: [ovirt-users] Storage domain mount error: Lustre file system(Posix compliant FS)
On Thu, Oct 25, 2018 at 7:25 AM <okok102928(a)fusiondata.co.kr> wrote:
Hi.
I am an oVirt user in Korea, working on VDI. It's a pleasure to meet you, the oVirt specialists.
(I do not speak English well... thank you for your understanding!)
I am testing the Lustre file system in an oVirt / RH(E)V environment.
(The reason is simple: GlusterFS and NFS have performance limits, while SAN storage and good software-defined storage are quite expensive.)
Please re-test Gluster with the latest 4.2.7, where we have nicely improved performance, specifically for Gluster-based storage.
Testing the file system performance itself was successful.
As expected, Lustre showed amazing performance.
However, there was an error when adding the Lustre storage as a POSIX compliant FS storage domain.
Domain Function : Data
Storage Type : POSIX compliant FS
Host to Use : [SPM_HOSTNAME]
Name : [STORAGE_DOMAIN_NAME]
Path : 10.10.10.15@tcp:/lustre/vmstore
VFS Type : lustre
Mount Options :
The vdsm debug logs are shown below.
2018-10-25 12:46:58,963+0900 INFO (jsonrpc/2) [storage.xlease] Formatting index for lockspace u'c0ef7ee6-1da9-4eef-9e03-387cd3a24445' (version=1) (xlease:653)
2018-10-25 12:46:58,971+0900 DEBUG (jsonrpc/2) [root] /usr/bin/dd iflag=fullblock of=/rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore/c0ef7ee6-1da9-4eef-9e03-387cd3a24445/dom_md/xleases oflag=direct,seek_bytes seek=1048576 bs=256512 count=1 conv=notrunc,nocreat,fsync (cwd None) (commands:65)
2018-10-25 12:46:58,985+0900 DEBUG (jsonrpc/2) [root] FAILED: <err> = "/usr/bin/dd: error writing '/rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore/c0ef7ee6-1da9-4eef-9e03-387cd3a24445/dom_md/xleases': Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 0.000943896 s, 0.0 kB/s\n"; <rc> = 1 (commands:86)
2018-10-25 12:46:58,985+0900 INFO (jsonrpc/2) [vdsm.api] FINISH createStorageDomain error=Command ['/usr/bin/dd', 'iflag=fullblock', u'of=/rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore/c0ef7ee6-1da9-4eef-9e03-387cd3a24445/dom_md/xleases', 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' err="/usr/bin/dd: error writing '/rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore/c0ef7ee6-1da9-4eef-9e03-387cd3a24445/dom_md/xleases': Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 0.000943896 s, 0.0 kB/s\n" from=::ffff:192.168.161.104,52188, flow_id=794bd395, task_id=c9847bf3-2267-483b-9099-f05a46981f7f (api:50)
2018-10-25 12:46:58,985+0900 ERROR (jsonrpc/2) [storage.TaskManager.Task] (Task='c9847bf3-2267-483b-9099-f05a46981f7f') Unexpected error (task:875)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
return fn(*args, **kargs)
File "<string>", line 2, in createStorageDomain
File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
ret = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2591, in createStorageDomain
storageType, domVersion)
File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line 87, in create
remotePath, storageType, version)
File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 465, in _prepareMetadata
cls.format_external_leases(sdUUID, xleases_path)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 1200, in format_external_leases
xlease.format_index(lockspace, backend)
File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 661, in format_index
index.dump(file)
File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 761, in dump
file.pwrite(INDEX_BASE, self._buf)
File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 994, in pwrite
self._run(args, data=buf[:])
File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 1011, in _run
raise cmdutils.Error(args, rc, "[suppressed]", err)
Error: Command ['/usr/bin/dd', 'iflag=fullblock', u'of=/rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore/c0ef7ee6-1da9-4eef-9e03-387cd3a24445/dom_md/xleases', 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' err="/usr/bin/dd: error writing '/rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore/c0ef7ee6-1da9-4eef-9e03-387cd3a24445/dom_md/xleases': Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 0.000943896 s, 0.0 kB/s\n"
2018-10-25 12:46:58,986+0900 DEBUG (jsonrpc/2) [storage.TaskManager.Task] (Task='c9847bf3-2267-483b-9099-f05a46981f7f') Task._run: c9847bf3-2267-483b-9099-f05a46981f7f (6, u'c0ef7ee6-1da9-4eef-9e03-387cd3a24445', u'vmstore', u'10.10.10.15@tcp:/lustre/vmstore', 1, u'4') {} failed - stopping task (task:894)
2018-10-25 12:46:58,986+0900 DEBUG (jsonrpc/2) [storage.TaskManager.Task] (Task='c9847bf3-2267-483b-9099-f05a46981f7f') stopping in state failed (force False) (task:1256)
2018-10-25 12:46:58,986+0900 DEBUG (jsonrpc/2) [storage.TaskManager.Task] (Task='c9847bf3-2267-483b-9099-f05a46981f7f') ref 1 aborting True (task:1002)
2018-10-25 12:46:58,986+0900 INFO (jsonrpc/2) [storage.TaskManager.Task] (Task='c9847bf3-2267-483b-9099-f05a46981f7f') aborting: Task is aborted: u'Command [\'/usr/bin/dd\', \'iflag=fullblock\', u\'of=/rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore/c0ef7ee6-1da9-4eef-9e03-387cd3a24445/dom_md/xleases\', \'oflag=direct,seek_bytes\', \'seek=1048576\', \'bs=256512\', \'count=1\', \'conv=notrunc,nocreat,fsync\'] failed with rc=1 out=\'[suppressed]\' err="/usr/bin/dd: error writing \'/rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore/c0ef7ee6-1da9-4eef-9e03-387cd3a24445/dom_md/xleases\': Invalid argument\\n1+0 records in\\n0+0 records out\\n0 bytes (0 B) copied, 0.000943896 s, 0.0 kB/s\\n"' - code 100 (task:1181)
2018-10-25 12:46:58,986+0900 DEBUG (jsonrpc/2) [storage.TaskManager.Task] (Task='c9847bf3-2267-483b-9099-f05a46981f7f') Prepare: aborted: Command ['/usr/bin/dd', 'iflag=fullblock', u'of=/rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore/c0ef7ee6-1da9-4eef-9e03-387cd3a24445/dom_md/xleases', 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' err="/usr/bin/dd: error writing '/rhev/data-center/mnt/10.10..10.15@tcp:_lustre_vmstore/c0ef7ee6-1da9-4eef-9e03-387cd3a24445/dom_md/xleases': Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 0.000943896 s, 0.0 kB/s\n" (task:1186)
2018-10-25 12:46:58,986+0900 DEBUG (jsonrpc/2) [storage.TaskManager.Task] (Task='c9847bf3-2267-483b-9099-f05a46981f7f') ref 0 aborting True (task:1002)
2018-10-25 12:46:58,986+0900 DEBUG (jsonrpc/2) [storage.TaskManager.Task] (Task='c9847bf3-2267-483b-9099-f05a46981f7f') Task._doAbort: force False (task:937)
2018-10-25 12:46:58,986+0900 DEBUG (jsonrpc/2) [storage.ResourceManager.Owner] Owner.cancelAll requests {} (resourceManager:947)
2018-10-25 12:46:58,986+0900 DEBUG (jsonrpc/2) [storage.TaskManager.Task] (Task='c9847bf3-2267-483b-9099-f05a46981f7f') moving from state failed -> state aborting (task:602)
2018-10-25 12:46:58,986+0900 DEBUG (jsonrpc/2) [storage.TaskManager.Task] (Task='c9847bf3-2267-483b-9099-f05a46981f7f') _aborting: recover policy none (task:557)
2018-10-25 12:46:58,987+0900 DEBUG (jsonrpc/2) [storage.TaskManager.Task] (Task='c9847bf3-2267-483b-9099-f05a46981f7f') moving from state failed -> state failed (task:602)
2018-10-25 12:46:58,987+0900 DEBUG (jsonrpc/2) [storage.ResourceManager.Owner] Owner.releaseAll requests {} resources {} (resourceManager:910)
2018-10-25 12:46:58,987+0900 DEBUG (jsonrpc/2) [storage.ResourceManager.Owner] Owner.cancelAll requests {} (resourceManager:947)
2018-10-25 12:46:58,987+0900 ERROR (jsonrpc/2) [storage.Dispatcher] FINISH createStorageDomain error=Command ['/usr/bin/dd', 'iflag=fullblock', u'of=/rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore/c0ef7ee6-1da9-4eef-9e03-387cd3a24445/dom_md/xleases', 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' err="/usr/bin/dd: error writing '/rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore/c0ef7ee6-1da9-4eef-9e03-387cd3a24445/dom_md/xleases': Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 0.000943896 s, 0.0 kB/s\n" (dispatcher:86)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/dispatcher.py", line 73, in wrapper
result = ctask.prepare(func, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 108, in wrapper
return m(self, *a, **kw)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1189, in prepare
raise self.error
Error: Command ['/usr/bin/dd', 'iflag=fullblock', u'of=/rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore/c0ef7ee6-1da9-4eef-9e03-387cd3a24445/dom_md/xleases', 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' err="/usr/bin/dd: error writing '/rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore/c0ef7ee6-1da9-4eef-9e03-387cd3a24445/dom_md/xleases': Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 0.000943896 s, 0.0 kB/s\n"
2018-10-25 12:46:58,987+0900 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call StorageDomain.create failed (error 351) in 0.41 seconds (__init__:573)
2018-10-25 12:46:59,058+0900 DEBUG (jsonrpc/3) [jsonrpc.JsonRpcServer] Calling 'StoragePool.disconnectStorageServer' in bridge with {u'connectionParams': [{u'id': u'316c5f1f-753e-42a1-8e30-4ee6f976906a', u'connection': u'10.10.10.15@tcp:/lustre/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'lustre', u'password': '********', u'port': u''}], u'storagepoolID': u'00000000-0000-0000-0000-000000000000', u'domainType': 6} (__init__:590)
2018-10-25 12:46:59,058+0900 WARN (jsonrpc/3) [devel] Provided value "6" not defined in StorageDomainType enum for StoragePool.disconnectStorageServer (vdsmapi:275)
2018-10-25 12:46:59,058+0900 WARN (jsonrpc/3) [devel] Provided parameters {u'id': u'316c5f1f-753e-42a1-8e30-4ee6f976906a', u'connection': u'10.10.10.15@tcp:/lustre/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'lustre', u'password': '********', u'port': u''} do not match any of union ConnectionRefParameters values (vdsmapi:275)
2018-10-25 12:46:59,059+0900 DEBUG (jsonrpc/3) [storage.TaskManager.Task] (Task='3c6b249f-a47f-47f1-a647-5893b6f60b7c') moving from state preparing -> state preparing (task:602)
2018-10-25 12:46:59,059+0900 INFO (jsonrpc/3) [vdsm.api] START disconnectStorageServer(domType=6, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'316c5f1f-753e-42a1-8e30-4ee6f976906a', u'connection': u'10.10.10.15@tcp:/lustre/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'lustre', u'password': '********', u'port': u''}], options=None) from=::ffff:192.168.161.104,52188, flow_id=f1bf4bf8-9033-42af-9329-69960638ba0e, task_id=3c6b249f-a47f-47f1-a647-5893b6f60b7c (api:46)
2018-10-25 12:46:59,059+0900 INFO (jsonrpc/3) [storage.Mount] unmounting /rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore (mount:212)
2018-10-25 12:46:59,094+0900 DEBUG (jsonrpc/3) [storage.Mount] /rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore unmounted: 0.03 seconds (utils:452)
1. If you use the direct flag with the dd command, Lustre only works in multiples of 4k (4096).
This is it: VDSM does not support native 4K.
Y.
Therefore, bs=256512, which is not a multiple of 4096, causes an error.
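(A way to check this outside of vdsm, with dd alone; the mount point below is just an example test path, not the real storage domain:
# should fail on the Lustre mount with "Invalid argument": 256512 is not a multiple of 4096
dd if=/dev/zero of=/mnt/lustre/odirect_test oflag=direct bs=256512 count=1
# should work if the 4 KiB rule above holds: 262144 = 64 * 4096
dd if=/dev/zero of=/mnt/lustre/odirect_test oflag=direct bs=262144 count=1
)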
2. The error occurs regardless of the oVirt / RH(E)V version. I have tested in 3.6, 4.1, and 4.2 environments.
I searched hard but could not find a similar case. I want to ask three questions.
1. Is there a way to fix the problem in oVirt? (A safe workaround or some configuration?)
2. (An extension of question 1) Where does the block size 256512 come from? Why is it 256512?
3. Is this something that needs to be solved on the Lustre file system side? (For example, a setting that allows direct I/O in units of 512 bytes.)
I need help. Thank you for your reply.
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/2XVAJGYIJEZ...
6 years
Unable to export vm as ova
by sangeetha.rathinavel@oracle.com
When I try to export a VM as an OVA, I get the error below in the engine log:
2018-10-26 12:11:36,657-04 ERROR [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-37) [484d3716] EngineException: ENGINE (Failed with error ENGINE and code 5001): org.ovirt.engine.core.common.errors.EngineException: EngineException: ENGINE (Failed with error ENGINE and code 5001)
at org.ovirt.engine.core.bll.exportimport.ExportOvaCommand.createOva(ExportOvaCommand.java:301) [bll.jar:]
at org.ovirt.engine.core.bll.exportimport.ExportOvaCommand.executeNextOperation(ExportOvaCommand.java:285) [bll.jar:]
at org.ovirt.engine.core.bll.exportimport.ExportOvaCommand.performNextOperation(ExportOvaCommand.java:277) [bll.jar:]
at org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback.childCommandsExecutionEnded(SerialChildCommandsExecutionCallback.java:32) [bll.jar:]
at org.ovirt.engine.core.bll.ChildCommandsCallbackBase.doPolling(ChildCommandsCallbackBase.java:68) [bll.jar:]
at org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:146) [bll.jar:]
at org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:107) [bll.jar:]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_181]
In the Ansible log:
writing disk: path=/rhev/data-center/mnt/ca-ovmstor101:_export_sanrathi_brml11g04-oVirt-nfs-01/aec73d1f-ea3d-4228-a151-a41b20b6b67b/images/e1186fdb-3fc3-46ae-945c-dd19bb6d434c/d95c52e3-35d1-493e-ab2d-42577bdb0391 size=196848
Traceback (most recent call last):
File "/root/.ansible/tmp/ansible-tmp-1540570288.02-200413742597669/pack_ova.py", line 96, in <module>
write_disks(ova_path, disks_info.split('+'))
File "/root/.ansible/tmp/ansible-tmp-1540570288.02-200413742597669/pack_ova.py", line 79, in write_disks
write_disk(ova_path, disk_path, disk_size)
File "/root/.ansible/tmp/ansible-tmp-1540570288.02-200413742597669/pack_ova.py", line 59, in write_disk
fd = os.open(disk_path, os.O_RDONLY | os.O_DIRECT)
OSError: [Errno 13] Permission denied: '/rhev/data-center/mnt/ca-ovmstor101:_export_sanrathi_brml11g04-oVirt-nfs-01/aec73d1f-ea3d-4228-a151-a41b20b6b67b/images/e1186fdb-3fc3-46ae-945c-dd19bb6d434c/d95c52e3-35d1-493e-ab2d-42577bdb0391'
My storage is on NFS.
Note: Export to an export domain works fine.
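One thing I still have to rule out is plain file ownership / root squash on that NFS export, since pack_ova.py appears to run as root; something along these lines (the path is copied from the traceback, and 36:36 is the vdsm:kvm ownership oVirt normally expects on NFS domains):
ls -ln /rhev/data-center/mnt/ca-ovmstor101:_export_sanrathi_brml11g04-oVirt-nfs-01/aec73d1f-ea3d-4228-a151-a41b20b6b67b/images/e1186fdb-3fc3-46ae-945c-dd19bb6d434c/
# and on the NFS server, check the export options (root_squash / anonuid)
exportfs -v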
Thanks in advance.
6 years
GetGlusterLocalLogicalVolumeListVDSCommand execution failed: null
by Marco Lorenzo Crociani
Hi,
I'm updating oVirt to 4.2.6.4-1.el7.
The engine is updated.
The compute nodes are updated.
The storage nodes are not yet updated, because oVirt 4.2.6.4-1.el7 depends on
Gluster 3.12 while I currently have Gluster 3.10.
Compatibility versions:
Compute data center: 4.2
Storage: 4.1 (because I have not yet updated the storage cluster),
so that data center is still 4.1.
I was about to upgrade the storage nodes when I noticed that
/var/log/ovirt-engine/engine.log is flooded with errors like:
2018-10-25 19:36:17,164+02 ERROR
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
(DefaultQuartzScheduler2) [55e89794] Command
'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = s23,
VdsIdVDSCommandParametersBase:{hostId='84a33357-1d04-44db-b0e3-4638ebc39d6c'})'
execution failed: null
6 errors (one for each storage server) every 6 seconds.
How could I fix them?
Can I continue to upgrade the storage nodes?
Best regards,
--
Marco Crociani
6 years
User Authorization error when adding iSCSI storage
by Tryfon Farmakakis
Hello. I am using oVirt 4.2.6.4 and have successfully added one CentOS 7.5 host. I am trying to create a new storage domain for a local storage device on the host, which has an iSCSI target configured on top of it. Everything seems to proceed fine: the iSCSI target is discovered by oVirt and the LUN is added, but when I click the OK button the operation fails and shows a user authentication error message, which doesn't make any sense since I am logged in to oVirt as an administrator. Here are the relevant lines from the ovirt-engine log file. Any help appreciated:
2018-10-30 06:26:32,492-04 WARN [org.ovirt.engine.core.bll.storage.domain.AddSANStorageDomainCommand] (default task-13) [da237443-21e0-4777-a0d4-3c6ecf6c63bd] Validation of action 'AddSANStorageDomain' failed for user admin@internal-authz. Reasons: VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__ADD,ACTION_TYPE_FAILED_DISCARD_AFTER_DELETE_NOT_SUPPORTED_BY_UNDERLYING_STORAGE,$storageDomainName VicinityStorageDomain
2018-10-30 06:26:32,713-04 INFO [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] (default task-13) [772f360b-453c-407a-90ac-b28a3915e377] No permission found for user '0ad3bf0c-d5e1-11e8-afa0-005056af2f08' or one of the groups he is member of, when running action 'AttachStorageDomainToPool', Required permissions are: Action type: 'ADMIN' Action group: 'MANIPULATE_STORAGE_DOMAIN' Object type: 'Storage' Object ID: 'null'.
2018-10-30 06:26:32,713-04 WARN [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] (default task-13) [772f360b-453c-407a-90ac-b28a3915e377] Validation of action 'AttachStorageDomainToPool' failed for user admin@internal-authz. Reasons: VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__ATTACH,USER_NOT_AUTHORIZED_TO_PERFORM_ACTION
A few extra minor issues manifest after clicking OK: I get a warning that the underlying device does not support discard after delete, even though I had that option unchecked in the previous dialog, and then a warning that the LUN already exists (which doesn't make sense, since the new storage domain was never created). Moreover, the iSCSI initiator that oVirt created on my host is still there and I have to tear it down manually.
6 years
Ovirt - Bonded NICs - using as tagged and untagged at the same time
by pslawrence@comcast.net
Hello,
I have searched and have not found an answer with respect to oVirt and VLAN tagging. I'm hoping someone will be able to assist.
I am using two 10 Gb interfaces bonded with 802.3ad as our second interface. I have created several interfaces that are tagged with specific VLANs and one interface, called 20GBBond, that is untagged.
The issue we are experiencing is that when I assign a VLAN to an interface, I can no longer use that VLAN on the 20GBBond from within a VM.
For example, ETH1 is tagged with VLAN 70 and assigned to VM1. The untagged 20GBBond interface is assigned to VM2, and within that VM I create the tagged sub-interface eth1.70.
Once I create the tagged ETH1 / VLAN 70 interface on VM1, I can no longer use the sub-interface eth1.70 created on VM2.
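For clarity, the guest-side setup on VM2 is just a standard VLAN sub-interface on the untagged vNIC, along these lines (the device name and address are examples):
# inside VM2, on the vNIC attached to the untagged 20GBBond network
ip link add link eth1 name eth1.70 type vlan id 70
ip link set eth1.70 up
ip addr add 192.168.70.12/24 dev eth1.70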
Is there a way to get around this, if it is a limitation?
6 years