On 5 Jul 2017, at 05:35, Yaniv Kaul <ykaul@redhat.com> wrote:
On Wed, Jul 5, 2017 at 7:12 AM, Vinícius Ferrão <ferrao@if.ufrj.br> wrote:
Adding another question to what Matthias has said.
I also noted that oVirt (and RHV) documentation does not mention the supported block size on iSCSI domains.
I’m interested in 4K blocks over iSCSI, but this isn’t really widely supported. The question is: does oVirt support this? Or should we stay with the default 512-byte block size?
It does not.
Y.
Discovered this the hard way: the system is able to detect it as a 4K LUN, but ovirt-hosted-engine-setup gets confused:
[2] 36589cfc00000071cbf2f2ef314a6212c 1600GiB FreeNAS iSCSI Disk
status: free, paths: 4 active
[3] 36589cfc00000043589992bce09176478 200GiB FreeNAS iSCSI Disk
status: free, paths: 4 active
[4] 36589cfc000000992f7abf38c11295bb6 400GiB FreeNAS iSCSI Disk
status: free, paths: 4 active
[2] is 4K
[3] is 512 bytes
[4] is 1K (just to prove the point)
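(For reference: 1600GiB is exactly 8x the real 200GiB, i.e. the 4096/512 ratio. That is the inflation you would get if a tool multiplied the kernel's sector count, which /sys/block/&lt;dev&gt;/size always reports in 512-byte units, by the 4096-byte logical block size instead of by 512. Where exactly the bug sits is only my guess; the arithmetic itself checks out:)

```shell
# 200GiB LUN: 52428800 native 4K sectors, i.e. 419430400 512-byte units.
sectors_4k=52428800
bytes=$((sectors_4k * 4096))       # actual size in bytes
sys_units=$((bytes / 512))         # what /sys/block/<dev>/size would report
inflated=$((sys_units * 4096))     # 512-byte unit count times 4K block size
echo "actual:   $((bytes / 1024 / 1024 / 1024)) GiB"      # prints "actual:   200 GiB"
echo "inflated: $((inflated / 1024 / 1024 / 1024)) GiB"   # prints "inflated: 1600 GiB"
```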
On the system it appears to be OK:
Disk /dev/mapper/36589cfc00000071cbf2f2ef314a6212c: 214.7 GB, 214748364800 bytes, 52428800 sectors
Units = sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 16384 bytes
I/O size (minimum/optimal): 16384 bytes / 1048576 bytes
Disk /dev/mapper/36589cfc00000043589992bce09176478: 214.7 GB, 214748364800 bytes, 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 16384 bytes
I/O size (minimum/optimal): 16384 bytes / 1048576 bytes
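(The same numbers can be read straight from blockdev without fdisk; a small sketch using the 4K LUN's device path from above, with a fallback so it stays runnable on a host that doesn't have this LUN:)

```shell
# Path is the 4K LUN from the fdisk output above; substitute your own WWID.
DEV=/dev/mapper/36589cfc00000071cbf2f2ef314a6212c
if [ -b "$DEV" ]; then
    # --getss: logical sector size (4096 here); --getpbsz: physical (16384 here)
    msg="logical=$(blockdev --getss "$DEV") physical=$(blockdev --getpbsz "$DEV")"
else
    msg="LUN not present on this host"
fi
echo "$msg"
```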
But whatever, just reporting back to the list. It would be a good idea to have a note about this in the documentation.
V.
Thanks,
V.
On 4 Jul 2017, at 09:10, Matthias Leopold <matthias.leopold@meduniwien.ac.at> wrote:
On 2017-07-04 at 10:01, Simone Tiraboschi wrote:
On Tue, Jul 4, 2017 at 5:50 AM, Vinícius Ferrão <ferrao@if.ufrj.br> wrote:
Thanks, Konstantin.
Just to be clear enough: the first deployment would be made on
classic eth interfaces and later after the deployment of Hosted
Engine I can convert the "ovirtmgmt" network to a LACP Bond, right?
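(For reference, the end state of such a conversion could be sketched as below. The interface names and bond options are made-up examples, and in practice the bond would normally be created from the engine's Setup Host Networks dialog rather than by hand; the commands are printed rather than executed here:)

```shell
# Sketch only: nmcli commands that would build an 802.3ad (LACP) bond
# from two hypothetical NICs (eth0/eth1). Printed, not executed.
cmds='nmcli con add type bond con-name bond0 ifname bond0 bond.options mode=802.3ad,miimon=100
nmcli con add type ethernet con-name bond0-port1 ifname eth0 master bond0
nmcli con add type ethernet con-name bond0-port2 ifname eth1 master bond0
nmcli con up bond0'
printf '%s\n' "$cmds"
```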
Another question: what about iSCSI Multipath on Self Hosted Engine?
I've looked through the net and only found this issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1193961
Appears to be unsupported as of today, but there's a workaround in the
comments. Is it safe to deploy this way? Should I use NFS instead?
It's probably not the most tested path, but once you have an engine you should be able to create an iSCSI bond on your hosts from the engine.
Network configuration is persisted across host reboots, and so is the iSCSI bond configuration.
A different story is instead having ovirt-ha-agent connect to multiple IQNs or multiple targets over your SAN. This is currently not supported for the hosted-engine storage domain.
See:
https://bugzilla.redhat.com/show_bug.cgi?id=1149579
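(A rough manual per-host sketch of what a multipath iSCSI setup boils down to: one target portal reachable on each storage subnet, log in to both, then check the paths. All addresses here are made-up examples, and the commands are echoed via a dry-run wrapper rather than executed:)

```shell
# Dry-run sketch: echo the manual iscsiadm steps instead of executing them.
# Example layout (hypothetical): portal 10.10.1.1:3260 on storage subnet A,
# portal 10.10.2.1:3260 on storage subnet B, one host NIC in each subnet.
run() { echo "+ $*"; }
run iscsiadm -m discovery -t sendtargets -p 10.10.1.1:3260   # subnet A portal
run iscsiadm -m discovery -t sendtargets -p 10.10.2.1:3260   # subnet B portal
run iscsiadm -m node -L all    # log in to every discovered target
run multipath -ll              # verify that multiple paths show up
```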
Hi Simone,
I think my recent post to this list titled "iSCSI multipathing setup troubles" is about the exact same problem, except that I'm not talking about the hosted-engine storage domain. I would like to configure _any_ iSCSI storage domain the way you describe it in https://bugzilla.redhat.com/show_bug.cgi?id=1149579#c1 , using the oVirt "iSCSI Multipathing" GUI after everything else is set up. I can't find a way to do this. Is this now possible? I think the iSCSI Multipathing documentation could be improved by describing an example IP setup for this.
thanks a lot
matthias
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users