
I tried adding a new storage domain on my hyperconverged test cluster running oVirt 4.3.3.7 and Gluster 6.1. I was able to create the new Gluster volume fine, but it's not able to add the Gluster storage domain (either as a managed Gluster volume or by entering the values directly). The created Gluster volume mounts and looks fine from the CLI.

Errors in the VDSM log:

2019-05-16 10:25:08,158-0500 INFO (jsonrpc/1) [vdsm.api] START connectStorageServer(domType=7, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'mnt_options': u'backup-volfile-servers=10.50.3.11:10.50.3.10', u'id': u'00000000-0000-0000-0000-000000000000', u'connection': u'10.50.3.12:/test', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], options=None) from=::ffff:10.100.90.5,44732, flow_id=fcde45c4-3b03-4a85-818a-06be560edee4, task_id=0582219d-ce68-4951-8fbd-3dce6d102fca (api:48)
2019-05-16 10:25:08,306-0500 INFO (jsonrpc/1) [storage.StorageServer.MountConnection] Creating directory u'/rhev/data-center/mnt/glusterSD/10.50.3.12:_test' (storageServer:168)
2019-05-16 10:25:08,306-0500 INFO (jsonrpc/1) [storage.fileUtils] Creating directory: /rhev/data-center/mnt/glusterSD/10.50.3.12:_test mode: None (fileUtils:199)
2019-05-16 10:25:08,306-0500 WARN (jsonrpc/1) [storage.StorageServer.MountConnection] Using user specified backup-volfile-servers option (storageServer:275)
2019-05-16 10:25:08,306-0500 INFO (jsonrpc/1) [storage.Mount] mounting 10.50.3.12:/test at /rhev/data-center/mnt/glusterSD/10.50.3.12:_test (mount:204)
2019-05-16 10:25:08,453-0500 INFO (jsonrpc/1) [IOProcessClient] (Global) Starting client (__init__:308)
2019-05-16 10:25:08,460-0500 INFO (ioprocess/5389) [IOProcess] (Global) Starting ioprocess (__init__:434)
2019-05-16 10:25:08,473-0500 INFO (itmap/0) [IOProcessClient] (/glusterSD/10.50.3.12:_test) Starting client (__init__:308)
2019-05-16 10:25:08,481-0500 INFO (ioprocess/5401) [IOProcess] (/glusterSD/10.50.3.12:_test) Starting ioprocess (__init__:434)
2019-05-16 10:25:08,484-0500 INFO (jsonrpc/1) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'status': 0, 'id': u'00000000-0000-0000-0000-000000000000'}]} from=::ffff:10.100.90.5,44732, flow_id=fcde45c4-3b03-4a85-818a-06be560edee4, task_id=0582219d-ce68-4951-8fbd-3dce6d102fca (api:54)
2019-05-16 10:25:08,484-0500 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call StoragePool.connectStorageServer succeeded in 0.33 seconds (__init__:312)
2019-05-16 10:25:09,169-0500 INFO (jsonrpc/7) [vdsm.api] START connectStorageServer(domType=7, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'mnt_options': u'backup-volfile-servers=10.50.3.11:10.50.3.10', u'id': u'd0ab6b05-2486-40f0-9b15-7f150017ec12', u'connection': u'10.50.3.12:/test', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], options=None) from=::ffff:10.100.90.5,44732, flow_id=31d993dd, task_id=9eb2f42c-852d-4af6-ae4e-f65d8283d6e0 (api:48)
2019-05-16 10:25:09,180-0500 INFO (jsonrpc/7) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'status': 0, 'id': u'd0ab6b05-2486-40f0-9b15-7f150017ec12'}]} from=::ffff:10.100.90.5,44732, flow_id=31d993dd, task_id=9eb2f42c-852d-4af6-ae4e-f65d8283d6e0 (api:54)
2019-05-16 10:25:09,180-0500 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call StoragePool.connectStorageServer succeeded in 0.01 seconds (__init__:312)
2019-05-16 10:25:09,186-0500 INFO (jsonrpc/5) [vdsm.api] START createStorageDomain(storageType=7, sdUUID=u'4037f461-2b6d-452f-8156-fcdca820a8a1', domainName=u'gTest', typeSpecificArg=u'10.50.3.12:/test', domClass=1, domVersion=u'4', block_size=512, max_hosts=250, options=None) from=::ffff:10.100.90.5,44732, flow_id=31d993dd, task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:48)
2019-05-16 10:25:09,492-0500 WARN (jsonrpc/5) [storage.LVM] Reloading VGs failed (vgs=[u'4037f461-2b6d-452f-8156-fcdca820a8a1'] rc=5 out=[] err=[' Volume group "4037f461-2b6d-452f-8156-fcdca820a8a1" not found', ' Cannot process volume group 4037f461-2b6d-452f-8156-fcdca820a8a1']) (lvm:442)
2019-05-16 10:25:09,507-0500 INFO (jsonrpc/5) [storage.StorageDomain] sdUUID=4037f461-2b6d-452f-8156-fcdca820a8a1 domainName=gTest remotePath=10.50.3.12:/test domClass=1, block_size=512, alignment=1048576 (nfsSD:86)
2019-05-16 10:25:09,521-0500 INFO (jsonrpc/5) [IOProcessClient] (4037f461-2b6d-452f-8156-fcdca820a8a1) Starting client (__init__:308)
2019-05-16 10:25:09,528-0500 INFO (ioprocess/5437) [IOProcess] (4037f461-2b6d-452f-8156-fcdca820a8a1) Starting ioprocess (__init__:434)
2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying file system doesn't support direct IO (fileSD:110)
2019-05-16 10:25:09,584-0500 INFO (jsonrpc/5) [vdsm.api] FINISH createStorageDomain error=Storage Domain target is unsupported: () from=::ffff:10.100.90.5,44732, flow_id=31d993dd, task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.TaskManager.Task] (Task='ecea28f3-60d4-476d-9ba8-b753b7c9940d') Unexpected error (task:875)
2019-05-16 10:25:09,584-0500 INFO (jsonrpc/5) [storage.TaskManager.Task] (Task='ecea28f3-60d4-476d-9ba8-b753b7c9940d') aborting: Task is aborted: 'Storage Domain target is unsupported: ()' - code 399 (task:1181)
2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.Dispatcher] FINISH createStorageDomain error=Storage Domain target is unsupported: () (dispatcher:83)
2019-05-16 10:25:09,584-0500 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call StorageDomain.create failed (error 399) in 0.40 seconds (__init__:312)
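Since the mount itself works, the direct I/O check can be reproduced by hand with dd (a rough stand-in for whatever probe VDSM runs internally, not necessarily its exact command; the probe file name below is arbitrary). Note that the createStorageDomain call above passes block_size=512, so a 512-byte direct write is the interesting case:

# Probe O_DIRECT on the mounted volume; a file system that rejects
# direct I/O (or requires larger alignment) typically fails here
# with "Invalid argument".
dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/10.50.3.12:_test/__direct_io_probe__ bs=512 count=1 oflag=direct
# Clean up the probe file afterwards.
rm -f /rhev/data-center/mnt/glusterSD/10.50.3.12:_test/__direct_io_probe__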
On May 16, 2019, at 11:55 AM, Nir Soffer <nsoffer@redhat.com> wrote:
On Thu, May 16, 2019 at 7:42 PM Strahil <hunter86_bg@yahoo.com> wrote:
Hi Sandro,
Thanks for the update.
I have just upgraded to RC1 (using Gluster v6 here) and the issue I detected in 4.3.3.7 - where Gluster storage domain creation fails - is still present.
What is this issue? Can you provide a link to the bug/mail about it?
Can you check if the 'dd' command executed during creation has been recently modified?
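In case it helps narrow this down, the gluster-side options that govern O_DIRECT handling can be inspected and toggled with the standard gluster CLI (shown against the 'test' volume from the log above; I haven't confirmed these options are the culprit here):

# Check how the volume currently handles direct I/O.
gluster volume get test performance.strict-o-direct
gluster volume get test network.remote-dio
# Enabling remote-dio is a commonly suggested workaround if direct I/O is rejected.
gluster volume set test network.remote-dio enable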
I've received an update from Darrell (also Gluster v6), but haven't received an update from anyone who is using Gluster v5, thus I haven't opened a bug yet.
Best Regards,
Strahil Nikolov
On May 16, 2019 11:21, Sandro Bonazzola <sbonazzo@redhat.com> wrote:
The oVirt Project is pleased to announce the availability of the oVirt 4.3.4 First Release Candidate, as of May 16th, 2019.
This update is a release candidate of the fourth in a series of stabilization updates to the 4.3 series. This is pre-release software. This pre-release should not be used in production.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)
Experimental tech preview for x86_64 and s390x architectures for Fedora 28 is also included.
See the release notes [1] for installation / upgrade instructions and a list of new features and bugs fixed.
Notes:
- oVirt Appliance is already available
- oVirt Node is already available [2]
Additional Resources:
* Read more about the oVirt 4.3.4 release highlights: http://www.ovirt.org/release/4.3.4/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog: http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.3.4/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
-- Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA
sbonazzo@redhat.com