Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

Hi Sandro,

Thanks for the update. I have just upgraded to RC1 (using gluster v6 here) and the issue I detected in 4.3.3.7 - where gluster Storage domain creation fails - is still present. Can you check if the 'dd' command executed during creation has been recently modified? I've received an update from Darrell (also gluster v6), but haven't received an update from anyone who is using gluster v5, so I haven't opened a bug yet.

Best Regards,
Strahil Nikolov

On May 16, 2019 11:21, Sandro Bonazzola <sbonazzo@redhat.com> wrote:
The oVirt Project is pleased to announce the availability of the oVirt 4.3.4 First Release Candidate, as of May 16th, 2019.
This update is a release candidate of the fourth in a series of stabilization updates to the 4.3 series. This is pre-release software. This pre-release should not be used in production.

This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later

This release supports Hypervisor Hosts on x86_64 and ppc64le architectures for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)

Experimental tech preview for x86_64 and s390x architectures for Fedora 28 is also included.

See the release notes [1] for installation / upgrade instructions and a list of new features and bugs fixed.

Notes:
- oVirt Appliance is already available
- oVirt Node is already available [2]

Additional Resources:
* Read more about the oVirt 4.3.4 release highlights: http://www.ovirt.org/release/4.3.4/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog: http://www.ovirt.org/blog/

[1] http://www.ovirt.org/release/4.3.4/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA
sbonazzo@redhat.com

On Thu, May 16, 2019 at 18:29, Strahil <hunter86_bg@yahoo.com> wrote:
Hi Sandro,
Thanks for the update.
I have just upgraded to RC1 (using gluster v6 here) and the issue I detected in 4.3.3.7 - where gluster Storage domain fails creation - is still present.
Can you check if the 'dd' command executed during creation has been recently modified ?
I've received update from Darrell (also gluster v6) , but haven't received an update from anyone who is using gluster v5 -> thus I haven't opened a bug yet.
Thanks for the feedback. I added a few people to the thread; hopefully they can help with this.
Best Regards,
Strahil Nikolov
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA
sbonazzo@redhat.com

On Thu, 16 May 2019, 10:02 p.m. Sandro Bonazzola, <sbonazzo@redhat.com> wrote:
On Thu, May 16, 2019 at 18:29, Strahil <hunter86_bg@yahoo.com> wrote:
Hi Sandro,
Thanks for the update.
I have just upgraded to RC1 (using gluster v6 here) and the issue I detected in 4.3.3.7 - where gluster Storage domain fails creation - is still present.
What is the error? Can I get the error log? Maybe the engine and vdsm logs.

Can you check if the 'dd' command executed during creation has been recently modified ?
I've received update from Darrell (also gluster v6) , but haven't received an update from anyone who is using gluster v5 -> thus I haven't opened a bug yet.
Thanks for the feedback, I added a few people to the thread, hopefully they can help on this.
Best Regards,
Strahil Nikolov

Engine logs: 2019-05-16 10:25:08,015-05 INFO [org.ovirt.engine.core.bll.storage.connection.AddStorageServerConnectionCommand] (default task-519) [fcde45c4-3b03-4a85-818a-06be560edee4] Lock Acquired to object 'EngineLock:{exclusiveLocks='[localhost:/test=STORAGE_CONNECTION]', sharedLocks=''}' 2019-05-16 10:25:08,152-05 INFO [org.ovirt.engine.core.bll.storage.connection.AddStorageServerConnectionCommand] (default task-519) [fcde45c4-3b03-4a85-818a-06be560edee4] Running command: AddStorageServerConnectionCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN 2019-05-16 10:25:08,153-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-519) [fcde45c4-3b03-4a85-818a-06be560edee4] START, ConnectStorageServerVDSCommand(HostName = boneyard, StorageServerConnectionManagementVDSParameters:{hostId='789a1c44-144d-4fb1-a8c1-26f8ddc06420', storagePoolId='00000000-0000-0000-0000-000000000000', storageType='GLUSTERFS', connectionList='[StorageServerConnections:{id='null', connection='10.50.3.12:/test', iqn='null', vfsType='glusterfs', mountOptions='backup-volfile-servers=10.50.3.11:10.50.3.10', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]', sendNetworkEventOnFailure='true'}), log id: 53674b6e 2019-05-16 10:25:08,430-05 ERROR [org.ovirt.engine.core.services.GlusterEventsWebHookServlet] (default task-521) [] Error processing event data 2019-05-16 10:25:08,433-05 ERROR [org.ovirt.engine.core.services.GlusterEventsWebHookServlet] (default task-521) [] Error processing event data 2019-05-16 10:25:08,436-05 ERROR [org.ovirt.engine.core.services.GlusterEventsWebHookServlet] (default task-521) [] Error processing event data 2019-05-16 10:25:08,438-05 ERROR [org.ovirt.engine.core.services.GlusterEventsWebHookServlet] (default task-521) [] Error processing event data 2019-05-16 10:25:08,438-05 ERROR [org.ovirt.engine.core.services.GlusterEventsWebHookServlet] (default task-523) [] Error processing event data 2019-05-16 10:25:08,485-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-519) [fcde45c4-3b03-4a85-818a-06be560edee4] FINISH, ConnectStorageServerVDSCommand, return: {00000000-0000-0000-0000-000000000000=0}, log id: 53674b6e 2019-05-16 10:25:08,616-05 INFO [org.ovirt.engine.core.bll.storage.connection.AddStorageServerConnectionCommand] (default task-519) [fcde45c4-3b03-4a85-818a-06be560edee4] Lock freed to object 'EngineLock:{exclusiveLocks='[localhost:/test=STORAGE_CONNECTION]', sharedLocks=''}' 2019-05-16 10:25:08,945-05 INFO [org.ovirt.engine.core.bll.storage.domain.AddGlusterFsStorageDomainCommand] (default task-519) [6eced2f6-a5bb-4826-a90f-c185534b8d9b] Running command: AddGlusterFsStorageDomainCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN 2019-05-16 10:25:09,031-05 INFO [org.ovirt.engine.core.bll.profiles.AddDiskProfileCommand] (default task-519) [31d993dd] Running command: AddDiskProfileCommand internal: true. 
Entities affected : ID: 4037f461-2b6d-452f-8156-fcdca820a8a1 Type: StorageAction group CREATE_STORAGE_DISK_PROFILE with role type ADMIN 2019-05-16 10:25:09,099-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-519) [31d993dd] EVENT_ID: USER_ADDED_DISK_PROFILE(10,120), Disk Profile gTest was successfully added (User: telsin@ohgnetworks-authz). 2019-05-16 10:25:09,165-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-519) [31d993dd] START, ConnectStorageServerVDSCommand(HostName = boneyard, StorageServerConnectionManagementVDSParameters:{hostId='789a1c44-144d-4fb1-a8c1-26f8ddc06420', storagePoolId='00000000-0000-0000-0000-000000000000', storageType='GLUSTERFS', connectionList='[StorageServerConnections:{id='d0ab6b05-2486-40f0-9b15-7f150017ec12', connection='10.50.3.12:/test', iqn='null', vfsType='glusterfs', mountOptions='backup-volfile-servers=10.50.3.11:10.50.3.10', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]', sendNetworkEventOnFailure='true'}), log id: 6a121bce 2019-05-16 10:25:09,181-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-519) [31d993dd] FINISH, ConnectStorageServerVDSCommand, return: {d0ab6b05-2486-40f0-9b15-7f150017ec12=0}, log id: 6a121bce 2019-05-16 10:25:09,183-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-519) [31d993dd] START, CreateStorageDomainVDSCommand(HostName = boneyard, CreateStorageDomainVDSCommandParameters:{hostId='789a1c44-144d-4fb1-a8c1-26f8ddc06420', storageDomain='StorageDomainStatic:{name='gTest', id='4037f461-2b6d-452f-8156-fcdca820a8a1'}', args='10.50.3.12:/test'}), log id: 65d17f3 2019-05-16 10:25:09,586-05 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-519) [31d993dd] Failed in 'CreateStorageDomainVDS' method 2019-05-16 10:25:09,661-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-519) [31d993dd] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM boneyard command CreateStorageDomainVDS failed: Storage Domain target is unsupported: () 2019-05-16 10:25:09,661-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-519) [31d993dd] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand' return value 'StatusOnlyReturn [status=Status [code=399, message=Storage Domain target is unsupported: ()]]' 2019-05-16 10:25:09,661-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-519) [31d993dd] HostName = boneyard 2019-05-16 10:25:09,661-05 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-519) [31d993dd] Command 'CreateStorageDomainVDSCommand(HostName = boneyard, CreateStorageDomainVDSCommandParameters:{hostId='789a1c44-144d-4fb1-a8c1-26f8ddc06420', storageDomain='StorageDomainStatic:{name='gTest', id='4037f461-2b6d-452f-8156-fcdca820a8a1'}', args='10.50.3.12:/test'})' execution failed: VDSGenericException: VDSErrorException: Failed to CreateStorageDomainVDS, error = Storage Domain target is unsupported: (), code = 399 2019-05-16 10:25:09,661-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-519) [31d993dd] FINISH, CreateStorageDomainVDSCommand, return: , log id: 65d17f3 2019-05-16 10:25:09,661-05 ERROR 
[org.ovirt.engine.core.bll.storage.domain.AddGlusterFsStorageDomainCommand] (default task-519) [31d993dd] Command 'org.ovirt.engine.core.bll.storage.domain.AddGlusterFsStorageDomainCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to CreateStorageDomainVDS, error = Storage Domain target is unsupported: (), code = 399 (Failed with error StorageDomainTargetUnsupported and code 399) 2019-05-16 10:25:09,663-05 INFO [org.ovirt.engine.core.bll.CommandCompensator] (default task-519) [31d993dd] Command [id=7a4ab6a0-d3e5-4608-b90a-2a9ca28be485]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.profiles.DiskProfile; snapshot: 3a037e5b-7411-419e-8e08-0b065bd140f9. 2019-05-16 10:25:09,664-05 INFO [org.ovirt.engine.core.bll.CommandCompensator] (default task-519) [31d993dd] Command [id=7a4ab6a0-d3e5-4608-b90a-2a9ca28be485]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.StorageDomainDynamic; snapshot: 4037f461-2b6d-452f-8156-fcdca820a8a1. 2019-05-16 10:25:09,664-05 INFO [org.ovirt.engine.core.bll.CommandCompensator] (default task-519) [31d993dd] Command [id=7a4ab6a0-d3e5-4608-b90a-2a9ca28be485]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.StorageDomainStatic; snapshot: 4037f461-2b6d-452f-8156-fcdca820a8a1. 2019-05-16 10:25:09,722-05 ERROR [org.ovirt.engine.core.bll.storage.domain.AddGlusterFsStorageDomainCommand] (default task-519) [31d993dd] Transaction rolled-back for command 'org.ovirt.engine.core.bll.storage.domain.AddGlusterFsStorageDomainCommand'. 2019-05-16 10:25:09,787-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-519) [31d993dd] EVENT_ID: USER_ADD_STORAGE_DOMAIN_FAILED(957), Failed to add Storage Domain gTest. (User: telsin@ohgnetworks-authz) 2019-05-16 10:25:10,109-05 INFO [org.ovirt.engine.core.bll.storage.connection.RemoveStorageServerConnectionCommand] (default task-521) [48279229-4c4a-485f-a087-5785f08638ac] Lock Acquired to object 'EngineLock:{exclusiveLocks='[localhost:/test=STORAGE_CONNECTION, d0ab6b05-2486-40f0-9b15-7f150017ec12=STORAGE_CONNECTION]', sharedLocks=''}' 2019-05-16 10:25:10,249-05 INFO [org.ovirt.engine.core.bll.storage.connection.RemoveStorageServerConnectionCommand] (default task-521) [48279229-4c4a-485f-a087-5785f08638ac] Running command: RemoveStorageServerConnectionCommand internal: false. 
Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN
2019-05-16 10:25:10,319-05 INFO [org.ovirt.engine.core.bll.storage.connection.RemoveStorageServerConnectionCommand] (default task-521) [48279229-4c4a-485f-a087-5785f08638ac] Removing connection 'd0ab6b05-2486-40f0-9b15-7f150017ec12' from database
2019-05-16 10:25:10,320-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand] (default task-521) [48279229-4c4a-485f-a087-5785f08638ac] START, DisconnectStorageServerVDSCommand(HostName = boneyard, StorageServerConnectionManagementVDSParameters:{hostId='789a1c44-144d-4fb1-a8c1-26f8ddc06420', storagePoolId='00000000-0000-0000-0000-000000000000', storageType='GLUSTERFS', connectionList='[StorageServerConnections:{id='d0ab6b05-2486-40f0-9b15-7f150017ec12', connection='localhost:/test', iqn='null', vfsType='glusterfs', mountOptions='backup-volfile-servers=10.50.3.11:10.50.3.10', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]', sendNetworkEventOnFailure='true'}), log id: 61e14259
2019-05-16 10:25:10,507-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand] (default task-521) [48279229-4c4a-485f-a087-5785f08638ac] FINISH, DisconnectStorageServerVDSCommand, return: {d0ab6b05-2486-40f0-9b15-7f150017ec12=477}, log id: 61e14259
2019-05-16 10:25:10,576-05 INFO [org.ovirt.engine.core.bll.storage.connection.RemoveStorageServerConnectionCommand] (default task-521) [48279229-4c4a-485f-a087-5785f08638ac] Lock freed to object 'EngineLock:{exclusiveLocks='[localhost:/test=STORAGE_CONNECTION, d0ab6b05-2486-40f0-9b15-7f150017ec12=STORAGE_CONNECTION]', sharedLocks=''}'

Although something is up with this cluster: it both thinks and doesn't think it has a gluster network set up:

2019-05-16 10:25:17,317-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7530dc6d] Could not associate brick 'necropolis-san:/v0/bricks/gv0' of volume 'a81c5451-890f-4066-8841-45d4729e7bbc' with correct network as no gluster network found in cluster '00000002-0002-0002-0002-00000000017a' networks:

...but it is set on the storage network for the default cluster (and when editing this, the gluster role is checked but greyed out; the migration role looks similar when viewed this way too).
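To line these engine entries up with the vdsm side, the flow_id can be matched across both logs. A throwaway sketch, assuming the default log locations (/var/log/ovirt-engine/engine.log on the engine host, /var/log/vdsm/vdsm.log on the hypervisor) and using the flow id from the failed run above:

    import sys

    # Default log locations; engine.log lives on the engine host and
    # vdsm.log on the hypervisor, so run this where the file exists.
    LOGS = ["/var/log/ovirt-engine/engine.log", "/var/log/vdsm/vdsm.log"]

    def grep_flow(flow_id):
        for path in LOGS:
            try:
                with open(path, errors="replace") as f:
                    for line in f:
                        if flow_id in line:
                            print(path + ": " + line.rstrip())
            except FileNotFoundError:
                print(path + ": not present on this host")

    if __name__ == "__main__":
        # flow id of the failed AddGlusterFsStorageDomain run quoted above
        grep_flow(sys.argv[1] if len(sys.argv) > 1 else "31d993dd")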

On Thu, May 16, 2019 at 7:42 PM Strahil <hunter86_bg@yahoo.com> wrote:
Hi Sandro,
Thanks for the update.
I have just upgraded to RC1 (using gluster v6 here) and the issue I detected in 4.3.3.7 - where gluster Storage domain fails creation - is still present.
What is this issue? Can you provide a link to the bug/mail about it?

Can you check if the 'dd' command executed during creation has been recently modified ?
I've received update from Darrell (also gluster v6) , but haven't received an update from anyone who is using gluster v5 -> thus I haven't opened a bug yet.
Best Regards,
Strahil Nikolov

I tried adding a new storage domain on my hyper converged test cluster running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster volume fine, but it’s not able to add the gluster storage domain (as either a managed gluster volume or directly entering values). The created gluster volume mounts and looks fine from the CLI. Errors in VDSM log: 2019-05-16 10:25:08,158-0500 INFO (jsonrpc/1) [vdsm.api] START connectStorageServer(domType=7, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'mnt_options': u'backup-volfile-servers=10.50.3.11:10.50.3.10', u'id': u'00000000-0000-0000-0000-000000000000', u'connection': u'10.50.3.12:/test', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], options=None) from=::ffff:10.100.90.5,44732, flow_id=fcde45c4-3b03-4a85-818a-06be560edee4, task_id=0582219d-ce68-4951-8fbd-3dce6d102fca (api:48) 2019-05-16 10:25:08,306-0500 INFO (jsonrpc/1) [storage.StorageServer.MountConnection] Creating directory u'/rhev/data-center/mnt/glusterSD/10.50.3.12:_test' (storageServer:168) 2019-05-16 10:25:08,306-0500 INFO (jsonrpc/1) [storage.fileUtils] Creating directory: /rhev/data-center/mnt/glusterSD/10.50.3.12:_test mode: None (fileUtils:199) 2019-05-16 10:25:08,306-0500 WARN (jsonrpc/1) [storage.StorageServer.MountConnection] Using user specified backup-volfile-servers option (storageServer:275) 2019-05-16 10:25:08,306-0500 INFO (jsonrpc/1) [storage.Mount] mounting 10.50.3.12:/test at /rhev/data-center/mnt/glusterSD/10.50.3.12:_test (mount:204) 2019-05-16 10:25:08,453-0500 INFO (jsonrpc/1) [IOProcessClient] (Global) Starting client (__init__:308) 2019-05-16 10:25:08,460-0500 INFO (ioprocess/5389) [IOProcess] (Global) Starting ioprocess (__init__:434) 2019-05-16 10:25:08,473-0500 INFO (itmap/0) [IOProcessClient] (/glusterSD/10.50.3.12:_test) Starting client (__init__:308) 2019-05-16 10:25:08,481-0500 INFO (ioprocess/5401) [IOProcess] (/glusterSD/10.50.3.12:_test) Starting ioprocess (__init__:434) 2019-05-16 10:25:08,484-0500 INFO (jsonrpc/1) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'status': 0, 'id': u'00000000-0000-0000-0000-000000000000'}]} from=::ffff:10.100.90.5,44732, flow_id=fcde45c4-3b03-4a85-818a-06be560edee4, task_id=0582219d-ce68-4951-8fbd-3dce6d102fca (api:54) 2019-05-16 10:25:08,484-0500 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call StoragePool.connectStorageServer succeeded in 0.33 seconds (__init__:312) 2019-05-16 10:25:09,169-0500 INFO (jsonrpc/7) [vdsm.api] START connectStorageServer(domType=7, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'mnt_options': u'backup-volfile-servers=10.50.3.11:10.50.3.10', u'id': u'd0ab6b05-2486-40f0-9b15-7f150017ec12', u'connection': u'10.50.3.12:/test', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], options=None) from=::ffff:10.100.90.5,44732, flow_id=31d993dd, task_id=9eb2f42c-852d-4af6-ae4e-f65d8283d6e0 (api:48) 2019-05-16 10:25:09,180-0500 INFO (jsonrpc/7) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'status': 0, 'id': u'd0ab6b05-2486-40f0-9b15-7f150017ec12'}]} from=::ffff:10.100.90.5,44732, flow_id=31d993dd, task_id=9eb2f42c-852d-4af6-ae4e-f65d8283d6e0 (api:54) 2019-05-16 10:25:09,180-0500 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call StoragePool.connectStorageServer succeeded in 0.01 seconds (__init__:312) 2019-05-16 10:25:09,186-0500 INFO (jsonrpc/5) [vdsm.api] START 
createStorageDomain(storageType=7, sdUUID=u'4037f461-2b6d-452f-8156-fcdca820a8a1', domainName=u'gTest', typeSpecificArg=u'10.50.3.12:/test', domClass=1, domVersion=u'4', block_size=512, max_hosts=250, options=None) from=::ffff:10.100.90.5,44732, flow_id=31d993dd, task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:48) 2019-05-16 10:25:09,492-0500 WARN (jsonrpc/5) [storage.LVM] Reloading VGs failed (vgs=[u'4037f461-2b6d-452f-8156-fcdca820a8a1'] rc=5 out=[] err=[' Volume group "4037f461-2b6d-452f-8156-fcdca820a8a1" not found', ' Cannot process volume group 4037f461-2b6d-452f-8156-fcdca820a8a1']) (lvm:442) 2019-05-16 10:25:09,507-0500 INFO (jsonrpc/5) [storage.StorageDomain] sdUUID=4037f461-2b6d-452f-8156-fcdca820a8a1 domainName=gTest remotePath=10.50.3.12:/test domClass=1, block_size=512, alignment=1048576 (nfsSD:86) 2019-05-16 10:25:09,521-0500 INFO (jsonrpc/5) [IOProcessClient] (4037f461-2b6d-452f-8156-fcdca820a8a1) Starting client (__init__:308) 2019-05-16 10:25:09,528-0500 INFO (ioprocess/5437) [IOProcess] (4037f461-2b6d-452f-8156-fcdca820a8a1) Starting ioprocess (__init__:434) 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying file system doesn't supportdirect IO (fileSD:110) 2019-05-16 10:25:09,584-0500 INFO (jsonrpc/5) [vdsm.api] FINISH createStorageDomain error=Storage Domain target is unsupported: () from=::ffff:10.100.90.5,44732, flow_id=31d993dd, task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52) 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.TaskManager.Task] (Task='ecea28f3-60d4-476d-9ba8-b753b7c9940d') Unexpected error (task:875) 2019-05-16 10:25:09,584-0500 INFO (jsonrpc/5) [storage.TaskManager.Task] (Task='ecea28f3-60d4-476d-9ba8-b753b7c9940d') aborting: Task is aborted: 'Storage Domain target is unsupported: ()' - code 399 (task:1181) 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.Dispatcher] FINISH createStorageDomain error=Storage Domain target is unsupported: () (dispatcher:83) 2019-05-16 10:25:09,584-0500 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call StorageDomain.create failed (error 399) in 0.40 seconds (__init__:312)

On Thu, May 16, 2019 at 8:38 PM Darrell Budic <budic@onholyground.com> wrote:
I tried adding a new storage domain on my hyper converged test cluster running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster volume fine, but it’s not able to add the gluster storage domain (as either a managed gluster volume or directly entering values). The created gluster volume mounts and looks fine from the CLI. Errors in VDSM log:
...
2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying file system doesn't supportdirect IO (fileSD:110)
2019-05-16 10:25:09,584-0500 INFO (jsonrpc/5) [vdsm.api] FINISH createStorageDomain error=Storage Domain target is unsupported: () from=::ffff:10.100.90.5,44732, flow_id=31d993dd, task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
The direct I/O check has failed.

This is the code doing the check:

 98 def validateFileSystemFeatures(sdUUID, mountDir):
 99     try:
100         # Don't unlink this file, we don't have the cluster lock yet as it
101         # requires direct IO which is what we are trying to test for. This
102         # means that unlinking the file might cause a race. Since we don't
103         # care what the content of the file is, just that we managed to
104         # open it O_DIRECT.
105         testFilePath = os.path.join(mountDir, "__DIRECT_IO_TEST__")
106         oop.getProcessPool(sdUUID).directTouch(testFilePath)
107     except OSError as e:
108         if e.errno == errno.EINVAL:
109             log = logging.getLogger("storage.fileSD")
110             log.error("Underlying file system doesn't support"
111                       "direct IO")
112             raise se.StorageDomainTargetUnsupported()
113
114         raise

The actual check is done in ioprocess, using:

319     fd = open(path->str, allFlags, mode);
320     if (fd == -1) {
321         rv = fd;
322         goto clean;
323     }
324
325     rv = futimens(fd, NULL);
326     if (rv < 0) {
327         goto clean;
328     }

With:

    allFlags = O_WRONLY | O_CREAT | O_DIRECT

See:
https://github.com/oVirt/ioprocess/blob/7508d23e19aeeb4dfc180b854a5a92690d2e...

According to the error message:

    Underlying file system doesn't support direct IO

We got EINVAL, which is possible only from open(), and is likely an issue opening the file with O_DIRECT. So something is wrong in the file system.

To confirm, you can try to do:

    dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct

This will probably fail with:

    dd: failed to open '/path/to/mountpoint/test': Invalid argument

If it succeeds, but oVirt fails to connect to this domain, file a bug and we will investigate.

Nir
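To reproduce the same probe outside of vdsm, a minimal sketch along these lines (not vdsm code; the mount path is just the example from the logs above) opens a test file with O_DIRECT and reports EINVAL the same way ioprocess does:

    import errno
    import os

    def probe_direct_io(mount_dir):
        # Same idea as directTouch(): try to create/open a file with
        # O_DIRECT; EINVAL from open() means the underlying file system
        # (or the gluster volume configuration) refused direct I/O.
        test_path = os.path.join(mount_dir, "__DIRECT_IO_TEST__")
        try:
            fd = os.open(test_path,
                         os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o644)
        except OSError as e:
            if e.errno == errno.EINVAL:
                return False
            raise
        os.close(fd)
        return True

    if __name__ == "__main__":
        print(probe_direct_io("/rhev/data-center/mnt/glusterSD/10.50.3.12:_test"))

This is effectively the same thing the dd command above tests, so both checks should give the same answer for a given mount.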

On May 16, 2019, at 1:41 PM, Nir Soffer <nsoffer@redhat.com> wrote:
On Thu, May 16, 2019 at 8:38 PM Darrell Budic <budic@onholyground.com> wrote:

I tried adding a new storage domain on my hyper converged test cluster running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster volume fine, but it’s not able to add the gluster storage domain (as either a managed gluster volume or directly entering values). The created gluster volume mounts and looks fine from the CLI. Errors in VDSM log:
...

2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying file system doesn't supportdirect IO (fileSD:110)
2019-05-16 10:25:09,584-0500 INFO (jsonrpc/5) [vdsm.api] FINISH createStorageDomain error=Storage Domain target is unsupported: () from=::ffff:10.100.90.5,44732, flow_id=31d993dd, task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
The direct I/O check has failed.
So something is wrong in the files system.
To confirm, you can try to do:
dd if=/dev/zero of=/path/to/mountoint/test bs=4096 count=1 oflag=direct
This will probably fail with: dd: failed to open '/path/to/mountoint/test': Invalid argument
If it succeeds, but oVirt fail to connect to this domain, file a bug and we will investigate.
Nir
Yep, it fails as expected. Just to check, it is working on pre-existing volumes, so I poked around at gluster settings for the new volume. It has network.remote-dio=off set on the new volume, but enabled on old volumes. After enabling it, I’m able to run the dd test:

[root@boneyard mnt]# gluster vol set test network.remote-dio enable
volume set: success
[root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1 oflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s

I’m also able to add the storage domain in ovirt now. I see network.remote-dio=enable is part of the gluster virt group, so apparently it’s not getting set by ovirt during the volume creation/optimize for storage?
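To compare these options across volumes without clicking through the UI, a rough sketch like this (assuming the gluster CLI is on PATH on a peer node and the usual 'gluster volume get' output layout; the volume name 'test' is the one from above) prints the current values:

    import subprocess

    # Options discussed in this thread.
    OPTIONS = ["network.remote-dio", "performance.strict-o-direct"]

    def volume_option(volume, option):
        out = subprocess.run(
            ["gluster", "volume", "get", volume, option],
            capture_output=True, text=True, check=True,
        ).stdout
        # Last line is expected to look like: "<option>   <value>"
        return out.strip().splitlines()[-1].split()[-1]

    for opt in OPTIONS:
        print(opt, "=", volume_option("test", opt))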

On Thu, May 16, 2019 at 10:12 PM Darrell Budic <budic@onholyground.com> wrote:
On May 16, 2019, at 1:41 PM, Nir Soffer <nsoffer@redhat.com> wrote:
On Thu, May 16, 2019 at 8:38 PM Darrell Budic <budic@onholyground.com> wrote:
I tried adding a new storage domain on my hyper converged test cluster running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster volume fine, but it’s not able to add the gluster storage domain (as either a managed gluster volume or directly entering values). The created gluster volume mounts and looks fine from the CLI. Errors in VDSM log:
...
2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying file system doesn't supportdirect IO (fileSD:110) 2019-05-16 10:25:09,584-0500 INFO (jsonrpc/5) [vdsm.api] FINISH createStorageDomain error=Storage Domain target is unsupported: () from=::ffff:10.100.90.5,44732, flow_id=31d993dd, task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
The direct I/O check has failed.
So something is wrong in the files system.
To confirm, you can try to do:
dd if=/dev/zero of=/path/to/mountoint/test bs=4096 count=1 oflag=direct
This will probably fail with: dd: failed to open '/path/to/mountoint/test': Invalid argument
If it succeeds, but oVirt fail to connect to this domain, file a bug and we will investigate.
Nir
Yep, it fails as expected. Just to check, it is working on pre-existing volumes, so I poked around at gluster settings for the new volume. It has network.remote-dio=off set on the new volume, but enabled on old volumes. After enabling it, I’m able to run the dd test:
[root@boneyard mnt]# gluster vol set test network.remote-dio enable
volume set: success
[root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1 oflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
I’m also able to add the storage domain in ovirt now.
I see network.remote-dio=enable is part of the gluster virt group, so apparently it’s not getting set by ovirt during the volume creation/optimize for storage?
I'm not sure who is responsible for changing these settings. oVirt always required direct I/O, and we never had to change anything in gluster.

Sahina, maybe gluster changed the defaults?

Darrell, please file a bug, probably for RHHI.

Nir
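For reference, the options the gluster 'virt' group applies come from a plain-text group file shipped on the gluster servers; a tiny sketch to dump it (the path below is the usual location and is an assumption, adjust for your packaging):

    # Print the key=value pairs the "virt" profile would apply.
    # /var/lib/glusterd/groups/virt is the usual location shipped with
    # glusterfs; this is an assumption, adjust for your distribution.
    GROUP_FILE = "/var/lib/glusterd/groups/virt"

    with open(GROUP_FILE) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                print(line)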

https://bugzilla.redhat.com/show_bug.cgi?id=1711054

In my case the dio is off, but I can still do direct io:

[root@ovirt1 glusterfs]# cd /rhev/data-center/mnt/glusterSD/gluster1\:_data__fast/
[root@ovirt1 gluster1:_data__fast]# gluster volume info data_fast | grep dio
network.remote-dio: off
[root@ovirt1 gluster1:_data__fast]# dd if=/dev/zero of=testfile bs=4096 count=1 oflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00295952 s, 1.4 MB/s

Most probably the 2 cases are different.

Best Regards,
Strahil Nikolov

On Fri, May 17, 2019 at 1:12 AM Nir Soffer <nsoffer@redhat.com> wrote:
I'm not sure who is responsible for changing these settings. oVirt always required directio, and we never had to change anything in gluster.
Sahina, maybe gluster changed the defaults?
Darrell, please file a bug, probably for RHHI.
Hello Darrell & Nir,

Do we have a bug available now for this issue? I just need to make sure performance.strict-o-direct=on is enabled on that volume.

Satheesaran Sundaramoorthi
Senior Quality Engineer, RHHI-V QE
Red Hat APAC

Ok, setting 'gluster volume set data_fast4 network.remote-dio on' allowed me to create the storage domain without any issues. I set it on all 4 new gluster volumes and the storage domains were successfully created.

I have created a bug for that: https://bugzilla.redhat.com/show_bug.cgi?id=1711060
If someone else already opened one - please ping me to mark this one as a duplicate.

Best Regards,
Strahil Nikolov
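A scripted version of that workaround might look roughly like the following (only data_fast and data_fast4 are named in this thread; the other volume names are placeholders):

    import subprocess

    # The four new volumes; names other than data_fast and data_fast4
    # are placeholders, substitute your own.
    VOLUMES = ["data_fast", "data_fast2", "data_fast3", "data_fast4"]

    for vol in VOLUMES:
        # Equivalent of: gluster volume set <vol> network.remote-dio on
        subprocess.run(
            ["gluster", "volume", "set", vol, "network.remote-dio", "on"],
            check=True,
        )
        print("network.remote-dio set to on for", vol)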

From the RHHI side, by default we are setting the below volume options:

    { group: 'virt',
      storage.owner-uid: '36',
      storage.owner-gid: '36',
      network.ping-timeout: '30',
      performance.strict-o-direct: 'on',
      network.remote-dio: 'off' }

On Fri, May 17, 2019 at 2:31 AM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
Ok, setting 'gluster volume set data_fast4 network.remote-dio on' allowed me to create the storage domain without any issues. I set it on all 4 new gluster volumes and the storage domains were successfully created.
I have created bug for that: https://bugzilla.redhat.com/show_bug.cgi?id=1711060
If someone else already opened - please ping me to mark this one as duplicate.
Best Regards, Strahil Nikolov
-- Thanks, Gobinda
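For illustration only, applying these defaults by hand to a newly created volume (hypothetically named 'data' here) would look roughly like this with the gluster CLI:

gluster volume set data group virt                      # apply the virt option group
gluster volume set data storage.owner-uid 36            # vdsm user
gluster volume set data storage.owner-gid 36            # kvm group
gluster volume set data network.ping-timeout 30
gluster volume set data performance.strict-o-direct on
gluster volume set data network.remote-dio off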

On Fri, May 17, 2019 at 7:54 AM Gobinda Das <godas@redhat.com> wrote:
From the RHHI side we are setting the below volume options by default:
{ group: 'virt', storage.owner-uid: '36', storage.owner-gid: '36', network.ping-timeout: '30', performance.strict-o-direct: 'on', network.remote-dio: 'off' }

According to the user reports, this configuration is not compatible with oVirt. Was this tested?

On Sun, 19 May 2019 at 12:21 AM, Nir Soffer <nsoffer@redhat.com> wrote:
According to the user reports, this configuration is not compatible with oVirt.
Was this tested?
Yes, this is set by default in all test configurations. We're checking on the bug, but the error is likely when the underlying device does not support 512-byte writes. With network.remote-dio off, gluster will ensure O_DIRECT writes.
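If the suspicion is that the bricks' backing device does not expose 512-byte sectors, a quick check (the device name is only an example) is:

blockdev --getss --getpbsz /dev/sdb            # logical and physical sector size
cat /sys/block/sdb/queue/logical_block_size    # same information via sysfs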

Wow, I think Strahil and I both hit different edge cases on this one. I was running that on my test cluster with a ZFS-backed brick, which does not support O_DIRECT in the current version (0.8 will, when it's released). I tested on an XFS-backed brick with the gluster virt group applied and network.remote-dio disabled, and oVirt was able to create the storage volume correctly. So not a huge problem for most people, I imagine.

Now I'm curious about the apparent disconnect between gluster and oVirt, though. Since the gluster virt group sets network.remote-dio on, what's the reasoning behind disabling it for these tests?
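To compare the two cases, the current per-volume values can be read back directly; the volume name 'data' below is just a placeholder:

gluster volume get data network.remote-dio
gluster volume get data performance.strict-o-direct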

I got confused so far. What is best for oVirt: remote-dio off or on? My latest gluster volumes were set to 'off', while the older ones are 'on'.

Best Regards,
Strahil Nikolov
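As a rough sketch (assuming the gluster CLI is available on the host), one way to see at a glance which volumes currently have it on or off:

for v in $(gluster volume list); do
    printf '%s: ' "$v"
    gluster volume get "$v" network.remote-dio | awk '/^network.remote-dio/ {print $2}'
done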
participants (8)
- Darrell Budic
- Gobinda Das
- Nir Soffer
- Sahina Bose
- Sandro Bonazzola
- Satheesaran Sundaramoorthi
- Strahil
- Strahil Nikolov