Wow, I think Strahil and I both hit different edge cases on this one. I was running that
on my test cluster with a ZFS-backed brick, which does not support O_DIRECT in the
current version (0.8 will, once it's released). I tested on an XFS-backed brick with the
gluster virt group applied and network.remote-dio disabled, and oVirt was able to create
the storage volume correctly. So not a huge problem for most people, I imagine.
Now I'm curious about the apparent disconnect between gluster and oVirt, though. Since the
gluster virt group sets network.remote-dio on, what's the reasoning behind disabling it
for these tests?
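For reference, a quick way to compare what the virt group would set with what a given volume actually has. The group file path below is what my packages use and <volname> is a placeholder, so adjust for your setup:

  # Options shipped in the gluster 'virt' group (path may differ by distro/packaging)
  cat /var/lib/glusterd/groups/virt
  # What the volume currently has set (<volname> is a placeholder)
  gluster volume get <volname> network.remote-dio
  gluster volume get <volname> performance.strict-o-direct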
On May 18, 2019, at 11:44 PM, Sahina Bose <sabose@redhat.com> wrote:
On Sun, 19 May 2019 at 12:21 AM, Nir Soffer <nsoffer@redhat.com> wrote:
On Fri, May 17, 2019 at 7:54 AM Gobinda Das <godas@redhat.com> wrote:
From the RHHI side, we set the following volume options by default:
{ group: 'virt',
  storage.owner-uid: '36',
  storage.owner-gid: '36',
  network.ping-timeout: '30',
  performance.strict-o-direct: 'on',
  network.remote-dio: 'off' }
According to the user reports, this configuration is not compatible with oVirt.
Was this tested?
Yes, this is set by default in all test configurations. We're checking on the bug, but the
error likely occurs when the underlying device does not support 512-byte writes.
With network.remote-dio off, gluster will ensure O_DIRECT writes.
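If the 512-byte write theory is right, a rough way to check what the brick's underlying device actually reports (the device name below is a placeholder):

  # Logical and physical sector sizes of the device backing the brick
  blockdev --getss /dev/sdX     # logical sector size, in bytes
  blockdev --getpbsz /dev/sdX   # physical sector size, in bytes
  # Or list them for all block devices at once
  lsblk -o NAME,LOG-SEC,PHY-SEC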
On Fri, May 17, 2019 at 2:31 AM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
Ok, setting 'gluster volume set data_fast4 network.remote-dio on' allowed me to
create the storage domain without any issues.
I set it on all 4 new gluster volumes and the storage domains were successfully created.
I have created a bug for that:
https://bugzilla.redhat.com/show_bug.cgi?id=1711060
If someone else has already opened one, please ping me and I'll mark this one as a duplicate.
Best Regards,
Strahil Nikolov
On Thursday, May 16, 2019, at 22:27:01 GMT+3, Darrell Budic <budic@onholyground.com> wrote:
On May 16, 2019, at 1:41 PM, Nir Soffer <nsoffer@redhat.com> wrote:
>
> On Thu, May 16, 2019 at 8:38 PM Darrell Budic <budic@onholyground.com> wrote:
> I tried adding a new storage domain on my hyperconverged test cluster running oVirt
4.3.3.7 and gluster 6.1. I was able to create the new gluster volume fine, but oVirt is not
able to add the gluster storage domain (either as a managed gluster volume or by
entering values directly). The created gluster volume mounts and looks fine from the CLI. Errors in the
VDSM log:
>
> ...
> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying file
system doesn't support direct IO (fileSD:110)
> 2019-05-16 10:25:09,584-0500 INFO (jsonrpc/5) [vdsm.api] FINISH createStorageDomain
error=Storage Domain target is unsupported: () from=::ffff:10.100.90.5,44732,
flow_id=31d993dd, task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>
> The direct I/O check has failed.
>
>
> So something is wrong in the file system.
>
> To confirm, you can try to do:
>
> dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct
>
> This will probably fail with:
> dd: failed to open '/path/to/mountpoint/test': Invalid argument
>
> If it succeeds, but oVirt fails to connect to this domain, file a bug and we will
investigate.
>
> Nir
Yep, it fails as expected. Just to check: direct I/O works on pre-existing volumes, so I
poked around at the gluster settings for the new volume. It has network.remote-dio=off set,
while the old volumes have it enabled. After enabling it, I'm able to run the dd
test:
[root@boneyard mnt]# gluster vol set test network.remote-dio enable
volume set: success
[root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1 oflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
I'm also able to add the storage domain in oVirt now.
I see network.remote-dio=enable is part of the gluster virt group, so apparently it's not
getting set by oVirt during volume creation/optimize for storage?
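In case it helps anyone hitting the same thing, a rough manual workaround sketch (volume name 'test' as in the example above; note that applying a group is a one-shot set and won't re-sync if the group file changes later):

  # Apply the whole virt option group to the volume by hand
  gluster volume set test group virt
  # Confirm the option the dd/direct I/O check depends on
  gluster volume get test network.remote-dio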
--
Thanks,
Gobinda