Well, personally I know ZFS and I don’t know VDO. I’ll have to check it out now that I
know it exists; having it at the dm layer sounds interesting. I don’t use ZFS exclusively
either, but I’ve been finding it useful for compression.
What is your source for the statement that COW filesystems have downsides over time for VM
workloads?
On Nov 16, 2018, at 3:11 PM, Donny Davis <donny@fortnebula.com> wrote:
Why not just use the built-in stuff like VDO? What benefits does ZFS bring for the use
case?
For most VM-based workloads, ZFS is the opposite of ideal over the lifecycle of a VM. COW
filesystems have downsides over time.
On Thu, Nov 15, 2018 at 6:09 PM Darrell Budic <budic@onholyground.com> wrote:
I did this in the past and didn’t have any trouble with gluster/ZFS, but 4.2.x probably
does more validation.
I recommend these settings on your ZFS volumes. I set mine at the root (v0 here) and let
them inherit:
required:
  v0  xattr        sa        local
  v0  acltype      posixacl  local
optional, but I recommend them:
  v0  relatime     on        local
  v0  compression  lz4       local
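That’s zfs get style output; if you’re applying them fresh rather than reading them back,
the equivalent commands would be something like (v0 being your pool root, as above):

  zfs set xattr=sa v0
  zfs set acltype=posixacl v0
  zfs set relatime=on v0
  zfs set compression=lz4 v0

The first two are the ones gluster actually needs; the other two are just good hygiene
for this workload.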
For gluster, I think it checks that the volume has been “optimized for virt storage”.
Either apply the virt group or set the options you’ll find in
/var/lib/glusterd/groups/virt; see the example below.
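Applying the group is the quicker route; assuming your volume is named engine, as in your
output below, something like:

  gluster volume set engine group virt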
Note that I don’t recommend the default settings for cluster.shd-max-threads and
cluster.shd-wait-qlength; they can swamp your machines during heals unless you have a lot
of cores and RAM. You get a slightly faster heal, but often have VMs pausing for storage
or other oddball storage-related errors. I prefer max-threads = 1 or maybe 2, and
wait-qlength = 1024 or 2048 (see the commands below). These are set per volume, so they
hit harder than you’d think if you have a lot of volumes running.
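Same assumption (a volume named engine); setting those would look like:

  gluster volume set engine cluster.shd-max-threads 1
  gluster volume set engine cluster.shd-wait-qlength 1024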
Also make sure the gluster volumes themselves got set to 36:36 for owner:group
(vdsm:kvm); it doesn’t matter for the bricks. You can do it with volume settings, or
mount the volume and set it manually, as sketched below.
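Either of these should work, again assuming the engine volume:

  gluster volume set engine storage.owner-uid 36
  gluster volume set engine storage.owner-gid 36

or mount it and chown:

  mount -t glusterfs localhost:/engine /mnt/engine
  chown 36:36 /mnt/engine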
Hope it helps!
-Darrell
> On Nov 15, 2018, at 6:28 AM, Thomas Simmons <twsnnva@gmail.com> wrote:
>
> Hello All,
>
> I recently took a new job in a Red Hat shop and I'd like to move all of my homelab
> systems to Red Hat upstream products to better align with what I manage at work. I had
> a "custom" (aka hacked-together) 3-node hyperconverged XenServer cluster and would like
> to get this moved over to oVirt (I'm currently testing with 4.2.7). Unfortunately, my
> storage is limited to software RAID with a 128GB SSD for cache. If at all possible, I
> would prefer to use ZFS (RAIDZ + ZIL + L2ARC) instead of MD RAID + lvmcache; however,
> I'm not able to get this working and I'm not sure why. My ZFS and Gluster configuration
> is working, at least to the point where I can manually mount all of my gluster volumes
> from all of my nodes; however, hosted-engine --deploy fails. I understand this isn't an
> out-of-the-box configuration for oVirt, but I see no reason why this shouldn't work. I
> would think this would be no different than using any other Gluster volume for the
> engine datastore. Am I missing something that would prevent this from working?
>
> [ INFO ] TASK [Add glusterfs storage domain]
> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Storage Domain target is unsupported]". HTTP response code is 400.
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "deprecations": [{"msg": "The 'ovirt_storage_domains' module is being renamed 'ovirt_storage_domain'", "version": 2.8}], "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Storage Domain target is unsupported]\". HTTP response code is 400."}
>
> Even though it fails, it appears to have mounted and written __DIRECT_IO_TEST__ to my
> Gluster volume:
>
> [root@vmh1 ~]# mount -t glusterfs localhost:/engine /mnt/engine/
> [root@vmh1 ~]# ls /mnt/engine/
> __DIRECT_IO_TEST__
>
> If I cancel and try to run the deploy again, I get a different failure:
>
> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Error creating a storage domain]". HTTP response code is 400.
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "deprecations": [{"msg": "The 'ovirt_storage_domains' module is being renamed 'ovirt_storage_domain'", "version": 2.8}], "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Error creating a storage domain]\". HTTP response code is 400."}
>
> Gluster seems ok...
>
> [root@vmh1 /]# gluster volume info engine
>
> Volume Name: engine
> Type: Replicate
> Volume ID: 2e34f8f5-0129-4ba5-983f-1eb5178deadc
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: vmh1-ib:/zpool1/engine
> Brick2: vmh2-ib:/zpool1/engine
> Brick3: vmh3-ib:/zpool1/engine
> Options Reconfigured:
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
>
> ZFS looks good too...
>
> [root@vmh1 ~]# ansible ovirthosts -m shell -a 'zpool status' -b
> vmh1 | CHANGED | rc=0 >>
> pool: zpool1
> state: ONLINE
> scan: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         zpool1      ONLINE       0     0     0
>           sdc       ONLINE       0     0     0
>           sdd       ONLINE       0     0     0
>           sde       ONLINE       0     0     0
>         logs
>           sdb2      ONLINE       0     0     0
>         cache
>           sdb1      ONLINE       0     0     0
>
> errors: No known data errors
>
> vmh3 | CHANGED | rc=0 >>
> pool: zpool1
> state: ONLINE
> scan: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         zpool1      ONLINE       0     0     0
>           sdc       ONLINE       0     0     0
>           sdd       ONLINE       0     0     0
>           sde       ONLINE       0     0     0
>         logs
>           sdb2      ONLINE       0     0     0
>         cache
>           sdb1      ONLINE       0     0     0
>
> errors: No known data errors
>
> vmh2 | CHANGED | rc=0 >>
> pool: zpool1
> state: ONLINE
> scan: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         zpool1      ONLINE       0     0     0
>           sdc       ONLINE       0     0     0
>           sdd       ONLINE       0     0     0
>           sde       ONLINE       0     0     0
>         logs
>           sdb2      ONLINE       0     0     0
>         cache
>           sdb1      ONLINE       0     0     0
>
> errors: No known data errors
>
> Permissions seem ok
>
> [root@vmh1 ~]# ansible ovirthosts -m shell -a 'ls -n /zpool1' -b
> vmh3 | CHANGED | rc=0 >>
> total 2
> drwxr-xr-x. 3 36 36 3 Nov 15 04:56 data
> drwxr-xr-x. 3 36 36 4 Nov 15 06:31 engine
> drwxr-xr-x. 3 36 36 3 Nov 15 04:56 iso
>
> vmh1 | CHANGED | rc=0 >>
> total 2
> drwxr-xr-x. 3 36 36 3 Nov 15 04:56 data
> drwxr-xr-x. 3 36 36 4 Nov 15 06:31 engine
> drwxr-xr-x. 3 36 36 3 Nov 15 04:56 iso
>
> vmh2 | CHANGED | rc=0 >>
> total 2
> drwxr-xr-x. 3 36 36 3 Nov 15 04:56 data
> drwxr-xr-x. 3 36 36 4 Nov 15 06:31 engine
> drwxr-xr-x. 3 36 36 3 Nov 15 04:56 iso
>