To complement my last message: having 500 GB of SLOG is almost irrelevant.

SLOG isn’t a write cache; it’s a write offload mechanism. The maximum space the SLOG will ever use equals the largest transaction group (txg) you can have in flight.

I don’t have the exact maths at hand, but I do know that a saturated 10GbE link will only generate on the order of 8 GB of SLOG.
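
As a rough back-of-the-envelope check (assuming the default ZFS txg interval of 5 seconds; the numbers are illustrative, not a precise sizing formula):

    # 10 GbE is at most ~1.25 GB/s of incoming writes
    echo $(( 10000 / 8 ))        # 1250 MB/s
    # With a 5-second txg interval, one txg holds at most ~6.25 GB
    echo $(( 10000 / 8 * 5 ))    # 6250 MB, same order of magnitude as ~8 GB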

The reasons to use a bigger SLOG device are endurance, and that some vendors only achieve full performance on larger models. It’s not the capacity you’re looking for.

This information is not easily found on the web, but the TrueNAS documentation is getting better, as you can see here: https://www.truenas.com/docs/references/slog/

Please excuse me if any of the information I provided about SLOG on ZFS is wrong. I don’t think it is, but I haven’t rechecked it in a while (maybe 3-5 years).


On 3 Mar 2022, at 00:28, Vinícius Ferrão <ferrao@versatushpc.com.br> wrote:

David, do yourself a favor and move away from NFS on TrueNAS for VM hosting.

In my personal experience, hosting VMs on NFS may take your entire infrastructure down if you change something on TrueNAS: even adding a new NFS share may trigger an NFS server restart, and suddenly all your VMs will be trashed. Emphasis on _may_.

I’ve been using the product since FreeNAS 8, back in 2012, and that’s observed behavior.

oVirt also has its quirks with iSCSI, mainly around MPIO (Multipath I/O), but for the combination with TrueNAS, just stick with iSCSI.
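
If you do end up on iSCSI, one thing worth knowing (a sketch, not official oVirt guidance): VDSM owns /etc/multipath.conf, but drop-in files under /etc/multipath/conf.d/ are left alone, so device-specific tuning can live there. The vendor/product strings below are illustrative; check the real ones with multipath -ll:

    # /etc/multipath/conf.d/truenas.conf (illustrative drop-in)
    devices {
        device {
            vendor  "TrueNAS"       # verify the actual strings with: multipath -ll
            product "iSCSI Disk"
            no_path_retry 4         # fail I/O after retries instead of queueing forever
        }
    }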


On 3 Mar 2022, at 00:02, David Johnson <djohnson@maxistechnology.com> wrote:


The cluster is on NFS today, with a 500 GB NVMe SLOG. Under heavy I/O the VMs are thrown into a paused state instead of iowait. A prior email chain identified a code error in qemu, with a repro using nothing more than dd to set 2 GB of the virtual disk to zeros.
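
For anyone wanting to reproduce, it was essentially just zeroing a region of the guest disk, something along these lines from inside a guest (the device name is hypothetical; use a scratch disk, not your root device):

    # Destructive: writes 2 GB of zeros to the virtual disk
    dd if=/dev/zero of=/dev/vdb bs=1M count=2048 oflag=direct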

Since the point of the system is to handle massive I/O workloads, this is obviously not acceptable.

If there is a way to make the NFS mount more robust, I'm all for it over the headaches that come with managing block I/O.

On Wed, Mar 2, 2022, 8:46 AM Nir Soffer <nsoffer@redhat.com> wrote:
On Wed, Mar 2, 2022 at 3:01 PM David Johnson <djohnson@maxistechnology.com> wrote:
Good morning folks, and thank you in advance.

I am working on migrating my oVirt backing store from NFS to iSCSI.

oVirt Environment:
oVirt Open Virtualization Manager
Software Version: 4.4.4.7-1.el8

TrueNAS environment:
FreeBSD truenas.local 12.2-RELEASE-p11 75566f060d4(HEAD) TRUENAS amd64

The iSCSI share is on a TrueNAS server, exposed to user vdsm and group 36.

oVirt sees the targeted share, but is unable to make use of it.

The latest issue is "Error while executing action New SAN Storage Domain: Volume Group block size error, please check your Volume Group configuration, Supported block size is 512 bytes."
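
A quick way to confirm what block size the hosts actually see on the LUN (the device name is hypothetical):

    # On an oVirt host, after the iSCSI login:
    lsblk -o NAME,LOG-SEC,PHY-SEC /dev/sdX   # logical/physical sector sizes
    blockdev --getss /dev/sdX                # oVirt wants this to report 512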

As near as I can tell, oVirt does not support any block size other than 512 bytes, while TrueNAS's smallest out-of-the-box block size is 4k.

This is correct, oVirt does not support 4k block storage.

I know that oVirt on TrueNAS is a common configuration, so I expect I am missing something really obvious here, probably a TrueNAS setting needed to make TrueNAS serve 512-byte blocks.

Any advice would be helpful.

You can use NFS exported by TrueNAS. With NFS the underlying block size is hidden, since direct I/O on NFS does not perform direct I/O on the server.

Another way is to use Managed Block Storage (MBS): if there is a Cinder driver that can manage your storage server, you can use MBS disks with any block size. The block size limit comes from the traditional LVM-based storage domain code. When using MBS, you use one LUN per disk, and qemu has no issue working with such LUNs.

Check with TrueNAS whether they support emulating a 512-byte block size or have another way to support clients that do not support 4k storage.
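
For reference, on the FreeBSD side this would be the per-LUN blocksize in ctld; in the TrueNAS UI it should correspond to the extent's "Logical Block Size" setting. A minimal sketch, with the target name and zvol path purely illustrative:

    # /etc/ctl.conf (TrueNAS generates this from the UI; shown for illustration)
    target iqn.2005-10.org.freenas.ctl:ovirt {
        portal-group pg0                     # portal group defined elsewhere in the file
        lun 0 {
            path /dev/zvol/tank/ovirt-data   # hypothetical zvol
            blocksize 512                    # logical block size exposed to initiators
        }
    }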

Nir
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/6NLGE4Q2ABJ2DEP7MXFRZ3QLQNP37A5V/