
Greetings,

My setup is a complete Red Hat install. Manager OS: RHEL 7.5. Hypervisor OS: RHEL 7.5. Running Red Hat CephFS (with their Ceph repos on all of the systems) with Red Hat Virtualization (aka oVirt). Everything is fully patched and updated as of yesterday morning.

Yes, I have valid Red Hat support, but I figured this was an odd enough problem that the community (and the Red-Hat-ers who hang out on this list) might have a better idea of where to start. (Although I might open a ticket anyway, because that is what support is for, right? :)

Quick background: when you mount an NFS export, your /etc/fstab should probably look something like this:

    <your.nfs.ip.addr>:/path/ /mount/point nfs <various options> 0 0

Just one IP is needed. Since part of the redundancy for Ceph is in the monitors, to mount CephFS the fstab entry should look something like this:

    <your.ceph.ip.addr1>,<your.ceph.ip.addr2>,<your.ceph.ip.addrX>:/path/ /mount/point ceph <various options> 0 0

Both the Ceph community and Red Hat recommend the comma separator for mounting against multiple CephFS monitor nodes (see section 4.2, point 3):
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/ce...

Now to oVirt/RHV. When I mount my Data Domain path as a POSIX file system with a path of "<your.ceph.ip.addr1>:/path/" it works splendidly well (especially after the last Red Hat kernel update!). I've done a bunch of stuff to it and it seems to work every time. However, I don't have the redundancy of multiple Ceph monitors.

When I mount my Data Domain path as a POSIX file system with a path of "<your.ceph.ip.addr1>,<your.ceph.ip.addr2>,<your.ceph.ip.addrX>:/path/" most things seem to work, but I noticed a higher rate of failures. The only failure I can trigger 100% of the time is to mount a second data import domain and attempt to copy a VM disk from the import into the CephFS data domain. Then I get an error like this:

    VDSM ovirt01 command HSMGetAllTasksStatusesVDS failed: low level Image copy failed: (u'Destination volume 7c1bb510-9f35-4456-8d51-0955f788ac3e error: ParamsList: sep , in /rhev/data-center/mnt/<your.ceph.ip.addr1>,<your.ceph.ip.addr2>,<your.ceph.ip.addr3>:_ovirt_data/70fb34ad-e66d-43e6-8412-5e020baa34df/images/23991a68-0c43-433f-b1f9-48b1533da54a',)

Uh oh. It seems that the commas in the mount path are causing the problems. So I went looking through the logs for "sep , in" and found a bunch more hits, which makes me think this is actually the problem message.

I've switched back to just one IP address for the time being, but I obviously want the Ceph redundancy back. While running on just one IP, the VM disk that refused to copy before had no problem copying. The _only_ change I made was dropping two of the three IPs from the Data Domain path option.

Is this a bug, or did I do something wrong? Does anyone have a suggestion for me to try?

Thank you!
~Stack~
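As a concrete illustration of the CephFS fstab template above, an entry might look like the sketch below; the monitor addresses, mount point, and secret-file path are made-up placeholders rather than values from this thread:

    # Three Ceph monitors listed in one mount spec (placeholder IPs).
    # name= and secretfile= select the CephX client and its key file.
    192.168.1.10:6789,192.168.1.11:6789,192.168.1.12:6789:/ /mnt/cephfs ceph name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev 0 0

Listing several monitors is what gives the mount the redundancy described above: the kernel client can reach the cluster as long as at least one of them answers.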

On Wed, 29 Aug 2018, 15:48 Stack Korora, <stackkorora@disroot.org> wrote:
Greetings,
My setup is a complete Red Hat install. Manager OS: RHEL 7.5 Hypervisors OS: RHEL 7.5 Running Red Hat CephFS (with their Ceph repos on all of the systems) with Red Hat Virtualization (aka oVirt). Everything is fully patched and updated as of yesterday morning.
Yes, I have valid Red Hat support but I figured this was an odd enough problem that the community (and the Red-Hat-ers who hang out on this list) might have a better idea of where to start. (Although I might open a ticket anyway just because that is what support is for, right? :)
Quick background:
Your /etc/fstab when you mount a nfs should probably look something like this: <your.nfs.ip.addr>:/path/ /mount/point nfs <various options> 0 0
Just one IP is needed. Since part of the redundancy for Ceph is in the monitors, to mount CephFS the fstab should look something like this:
<your.ceph.ip.addr1>,<your.ceph.ip.addr2>,<your.ceph.ip.addrX>:/path/ /mount/point ceph <various options> 0 0
Both the Ceph community and Red Hat recommend the comma separator for mounting multiple CephFS monitor nodes. (See section 4.2 point 3)
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/ce...
Now to oVirt/RHV.
When I mount my Data Domain path as a Posix file system with a path of "<your.ceph.ip.addr1>:/path/" it works splendidly well (especially after the last Red Hat kernel update!). I've done a bunch of stuff to it and it seems to work every time. However, I don't have the redundancy of multiple Ceph Monitors.
When I mount my Data Domain path as a Posix file system with a path of "<your.ceph.ip.addr1>,<your.ceph.ip.addr2>,<your.ceph.ip.addrX>:/path/" most things seem to work. But I noticed a higher rate of failures. The only failure that I can trigger 100% of the time though is to mount a second data import domain and attempt to copy a vm disk from the import into the CephFS Data domain. Then I get an error like this:
VDSM ovirt01 command HSMGetAllTasksStatusesVDS failed: low level Image copy failed: (u'Destination volume 7c1bb510-9f35-4456-8d51-0955f788ac3e error: ParamsList: sep , in
/rhev/data-center/mnt/<your.ceph.ip.addr1>,<your.ceph.ip.addr2>,<your.ceph.ip.addr3>:_ovirt_data/70fb34ad-e66d-43e6-8412-5e020baa34df/images/23991a68-0c43-433f-b1f9-48b1533da54a',)
Uh, oh. It seems that the commas in the mount path are causing the problems. So I went looking through the logs for "sep , in" and found a bunch more hits which makes me think that this is actually the problem message.
I've switched back to just one IP address for the time being but I obviously want the Ceph redundancy back. While running on just one IP, the vm disk that refused to copy before had no problem copying. The _only_ change I made was dropping two of the three IP's from the Data Domain path option.
Is this a bug, or did I do something wrong?
Looks like a bug, maybe vdsm is not parsing the mount spec correctly. Please file a vdsm bug and attach vdsm logs showing the entire flow. Regardless, I'm not sure how well oVirt with CephFS is tested, or recommended. Adding Yaniv to add more info on this.
Nir
Does anyone have a suggestion for me to try?
Thank you! ~Stack~

On 08/29/2018 09:28 AM, Nir Soffer wrote:
Looks like a bug, maybe vdsm is not parsing the mount spec correctly.
Please file a vdsm bug and attach vdsm logs showing the entire flow.
Regardless, I'm not sure how well oVirt with CephFS is tested, or recommended.
Adding Yaniv to add more info on this.
Nir
Thank you. I can file a report today.
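A sketch of how the logs Nir asked for might be gathered before filing the report; it assumes vdsm's default log location under /var/log/vdsm/, so adjust the paths to your install:

    # Show the lines around the suspected error, then bundle the logs
    # for attachment to the bug report.
    grep -B5 -A5 'sep , in' /var/log/vdsm/vdsm.log
    tar czf vdsm-logs.tar.gz /var/log/vdsm/vdsm.log*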

Hi,

maybe a foolish guess: did you try this?
https://www.spinics.net/lists/ceph-devel/msg30958.html

Kind regards,

Markus Stockhausen
Head of Software Technology
Ubierring 11 · 50678 Köln
Phone: +49 221 33 608 611
Mobile: +49 151 12040606
Mail: markus.stockhausen@collogia.de
Web: www.collogia-it-services.de

From: Stack Korora [stackkorora@disroot.org]
Sent: Wednesday, 29 August 2018 14:39
To: users
Subject: [ovirt-users] Multiple CephFS Monitors cause issues with oVirt

On 08/29/2018 10:14 AM, Markus Stockhausen wrote:
Hi,
maybe a foolish guess: Did you try this
https://www.spinics.net/lists/ceph-devel/msg30958.html
Kind regards,
Markus Stockhausen, Head of Software Technology
Thanks, I thought about that but I have not tried it. I will add it to my list to check today and will report back if it works (though I don't see why it wouldn't). It is good to know that someone else has at least had success with having a DNS entry for the multiple CephFS monitor hosts. ~Stack~

On 08/29/2018 10:44 AM, Stack Korora wrote:
A single DNS entry did not work. Red Hat's oVirt did not like mounting it even though it works fine via command line. :-/ I now have a Red Hat ticket open so we will see what happens on that front. Thanks! ~Stack~
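The command-line mount referred to above might look something like this sketch; the round-robin DNS name, mount point, and secret-file path are assumptions for illustration, not details from the posts:

    # "cephmons.example.com" stands in for a DNS record that resolves to the
    # monitor IPs, as suggested in the linked ceph-devel thread.
    mount -t ceph cephmons.example.com:6789:/ /mnt/ovirt_data \
        -o name=admin,secretfile=/etc/ceph/admin.secret,noatime

oVirt, however, still has to accept the same string as the POSIX storage domain path, which is apparently where it fell over.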

On Thu, Aug 30, 2018 at 1:24 AM Stack Korora <stackkorora@disroot.org> wrote:
A single DNS entry did not work. Red Hat's oVirt did not like mounting it even though it works fine via command line. :-/
I now have a Red Hat ticket open so we will see what happens on that front.
I can confirm that multiple host:port pairs in a mount spec are not supported by the current code. You can see all the supported formats here:
https://github.com/oVirt/vdsm/blob/d43376f3b2e913f3ee0ef226b5c196eb03da708f/...

Nir

Hi,

I think that there's already a bug on this issue: Bug 1577529 (https://bugzilla.redhat.com/show_bug.cgi?id=1577529) - [RFE] Support multiple hosts in posix storage domain path for cephfs.

Regards,
Idan

On Thu, Aug 30, 2018 at 1:41 AM, Nir Soffer <nsoffer@redhat.com> wrote:
I can confirm that multiple hosts:port in a mount spec is not supported by the current code.
You can see all the supported formats here: https://github.com/oVirt/vdsm/blob/d43376f3b2e913f3ee0ef226b5c196eb03da708f/tests/storage/fileutil_test.py#L182
Nir

Hm, why should oVirt mount differently from the OS? You just provide the mount options, and I expect that the Python scripts wrap that into a normal command. Strange. By the way, can you add me to the BZ?

Markus

On 30.08.2018 00:25, Stack Korora <stackkorora@disroot.org> wrote:
A single DNS entry did not work. Red Hat's oVirt did not like mounting it even though it works fine via command line. :-/ I now have a Red Hat ticket open so we will see what happens on that front.
participants (4)
- Idan Shaby
- Markus Stockhausen
- Nir Soffer
- Stack Korora