Hello All,
I recently took a new job in a Red Hat shop and I'd like to move all of my
homelab systems to Red Hat upstream products to better align with what I
manage at work. I had a "custom" (aka hacked-together) 3-node
hyperconverged XenServer cluster and would like to move it over to oVirt
(I'm currently testing with 4.2.7). Unfortunately, my storage is limited
to software RAID with a 128GB SSD for cache. If at all possible, I would
prefer to use ZFS (RAIDZ + ZIL + L2ARC) instead of MD RAID + lvmcache;
however, I can't get this working and I'm not sure why. My ZFS and Gluster
configuration works, at least to the point that I can manually mount all
of my Gluster volumes from all of my nodes, but hosted-engine --deploy
fails. I understand this isn't an out-of-the-box configuration for oVirt,
but I see no reason why it shouldn't work; I would think it would be no
different than using any other Gluster volume for the engine storage
domain. Am I missing something that would prevent this from working?
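For context, each Gluster brick lives under /zpool1 on each node, and the
engine volume was created by hand, roughly like this (reconstructed from
memory, so treat the exact commands as approximate):
[root@vmh1 ~]# zfs create zpool1/engine        (likewise on vmh2 and vmh3)
[root@vmh1 ~]# gluster volume create engine replica 3 \
    vmh1-ib:/zpool1/engine vmh2-ib:/zpool1/engine vmh3-ib:/zpool1/engine
[root@vmh1 ~]# gluster volume start engine
Here's where the deploy dies: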
[ INFO ] TASK [Add glusterfs storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
"[Storage Domain target is unsupported]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "deprecations":
[{"msg": "The 'ovirt_storage_domains' module is being renamed
'ovirt_storage_domain'", "version": 2.8}], "msg": "Fault reason is
\"Operation Failed\". Fault detail is \"[Storage Domain target is
unsupported]\". HTTP response code is 400."}
Even though the deploy fails, it appears to have mounted the volume and
written __DIRECT_IO_TEST__ to it:
[root@vmh1 ~]# mount -t glusterfs localhost:/engine /mnt/engine/
[root@vmh1 ~]# ls /mnt/engine/
__DIRECT_IO_TEST__
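(ZFS and O_DIRECT have a complicated history, so if that's a suspect, the
check I plan to re-run from the fuse mount is simply a direct-I/O write,
e.g.:
[root@vmh1 ~]# dd if=/dev/zero of=/mnt/engine/directio_test bs=4096 count=1 oflag=direct
though the fact that __DIRECT_IO_TEST__ exists suggests that part worked.)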
If I cancel and try to run the deploy again, I get a different failure:
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
"[Error creating a storage domain]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "deprecations":
[{"msg": "The 'ovirt_storage_domains' module is being renamed
'ovirt_storage_domain'", "version": 2.8}], "msg": "Fault reason is
\"Operation Failed\". Fault detail is \"[Error creating a storage
domain]\". HTTP response code is 400."}
Gluster seems ok...
[root@vmh1 /]# gluster volume info engine
Volume Name: engine
Type: Replicate
Volume ID: 2e34f8f5-0129-4ba5-983f-1eb5178deadc
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: vmh1-ib:/zpool1/engine
Brick2: vmh2-ib:/zpool1/engine
Brick3: vmh3-ib:/zpool1/engine
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
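One thing I notice is that those are just the stock options; I haven't
applied the virt tuning that a gdeploy/Cockpit hyperconverged install
would normally put on the volume. If that's what the installer is
objecting to, I assume the fix would be something along the lines of:
[root@vmh1 ~]# gluster volume set engine group virt
[root@vmh1 ~]# gluster volume set engine storage.owner-uid 36
[root@vmh1 ~]# gluster volume set engine storage.owner-gid 36
[root@vmh1 ~]# gluster volume set engine network.remote-dio enable
[root@vmh1 ~]# gluster volume set engine performance.strict-o-direct on
but I haven't confirmed that's actually what it's checking for.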
ZFS looks good too...
[root@vmh1 ~]# ansible ovirthosts -m shell -a 'zpool status' -b
vmh1 | CHANGED | rc=0 >>
pool: zpool1
state: ONLINE
scan: none requested
config:
NAME        STATE     READ WRITE CKSUM
zpool1      ONLINE       0     0     0
  sdc       ONLINE       0     0     0
  sdd       ONLINE       0     0     0
  sde       ONLINE       0     0     0
logs
  sdb2      ONLINE       0     0     0
cache
  sdb1      ONLINE       0     0     0
errors: No known data errors
vmh3 | CHANGED | rc=0 >>
pool: zpool1
state: ONLINE
scan: none requested
config:
NAME        STATE     READ WRITE CKSUM
zpool1      ONLINE       0     0     0
  sdc       ONLINE       0     0     0
  sdd       ONLINE       0     0     0
  sde       ONLINE       0     0     0
logs
  sdb2      ONLINE       0     0     0
cache
  sdb1      ONLINE       0     0     0
errors: No known data errors
vmh2 | CHANGED | rc=0 >>
pool: zpool1
state: ONLINE
scan: none requested
config:
NAME        STATE     READ WRITE CKSUM
zpool1      ONLINE       0     0     0
  sdc       ONLINE       0     0     0
  sdd       ONLINE       0     0     0
  sde       ONLINE       0     0     0
logs
  sdb2      ONLINE       0     0     0
cache
  sdb1      ONLINE       0     0     0
errors: No known data errors
Permissions seem ok, too (uid/gid 36:36 is vdsm:kvm):
[root@vmh1 ~]# ansible ovirthosts -m shell -a 'ls -n /zpool1' -b
vmh3 | CHANGED | rc=0 >>
total 2
drwxr-xr-x. 3 36 36 3 Nov 15 04:56 data
drwxr-xr-x. 3 36 36 4 Nov 15 06:31 engine
drwxr-xr-x. 3 36 36 3 Nov 15 04:56 iso
vmh1 | CHANGED | rc=0 >>
total 2
drwxr-xr-x. 3 36 36 3 Nov 15 04:56 data
drwxr-xr-x. 3 36 36 4 Nov 15 06:31 engine
drwxr-xr-x. 3 36 36 3 Nov 15 04:56 iso
vmh2 | CHANGED | rc=0 >>
total 2
drwxr-xr-x. 3 36 36 3 Nov 15 04:56 data
drwxr-xr-x. 3 36 36 4 Nov 15 06:31 engine
drwxr-xr-x. 3 36 36 3 Nov 15 04:56 iso
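The only other thing I can think of on the storage side is brick-level
ZFS properties. The ones I keep seeing recommended for Gluster on ZFS are
xattr=sa and acltype=posixacl, i.e. roughly:
[root@vmh1 ~]# zfs set xattr=sa zpool1
[root@vmh1 ~]# zfs set acltype=posixacl zpool1
Is that something hosted-engine --deploy actually depends on, or should I
be looking somewhere else entirely (vdsm logs, engine logs, etc.)?
Thanks in advance for any pointers.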