[root@ovirt1 ~]# gluster volume status
Status of volume: data
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1.gluster.private:/gluster_bricks/
data/data                                   49152     0          Y       3205
Brick gfs2.gluster.private:/gluster_bricks/
data/data                                   49152     0          Y       3193
Brick gfs3.gluster.private:/gluster_bricks/
data/data                                   49152     0          Y       3240
Self-heal Daemon on localhost               N/A       N/A        Y       3637
Self-heal Daemon on gfs2.gluster.private    N/A       N/A        Y       17771
Self-heal Daemon on gfs3.gluster.private    N/A       N/A        Y       17586

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: engine
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1.gluster.private:/gluster_bricks/
engine/engine                               49153     0          Y       3216
Brick gfs2.gluster.private:/gluster_bricks/
engine/engine                               49153     0          Y       3206
Brick gfs3.gluster.private:/gluster_bricks/
engine/engine                               49153     0          Y       3251
Self-heal Daemon on localhost               N/A       N/A        Y       3637
Self-heal Daemon on gfs2.gluster.private    N/A       N/A        Y       17771
Self-heal Daemon on gfs3.gluster.private    N/A       N/A        Y       17586

Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: vmstore
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1.gluster.private:/gluster_bricks/
vmstore/vmstore                             49154     0          Y       3225
Brick gfs2.gluster.private:/gluster_bricks/
vmstore/vmstore                             49154     0          Y       3235
Brick gfs3.gluster.private:/gluster_bricks/
vmstore/vmstore                             49154     0          Y       3264
Self-heal Daemon on localhost               N/A       N/A        Y       3637
Self-heal Daemon on gfs3.gluster.private    N/A       N/A        Y       17586
Self-heal Daemon on gfs2.gluster.private    N/A       N/A        Y       17771

Task Status of Volume vmstore
------------------------------------------------------------------------------
There are no active volume tasks
[root@ovirt1 ~]#

Is this a DNS issue? The back end runs on the same physical network, which would be OK, but is it OK for the Engine?
I tried setting up the last step with the back-end FQDN and it fails.
I also tried setting it up via the front end and it fails.
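
For what it's worth, the LAN DNS does resolve the back-end names (nslookup output below), so the next thing I plan to check is resolution from the hosts themselves, on the assumption that the deploy goes through the hosts' own resolver rather than my laptop's:

getent hosts gfs1.gluster.private
getent hosts gfs2.gluster.private
getent hosts gfs3.gluster.private
nslookup gfs1.gluster.private

getent hosts resolves the way the system does (/etc/hosts plus DNS), while nslookup queries DNS directly, so between them it should be clear whether the hosts can resolve the back-end FQDNs and where the answer comes from.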

nslookup from the LAN:
Robs-Air:~ rob$
Robs-Air:~ rob$ nslookup gfs1.gluster.private 192.168.100.1
Server: 192.168.100.1
Address: 192.168.100.1#53
Name: gfs1.gluster.private
Address: 10.10.45.11
Robs-Air:~ rob$
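
If it's relevant, I understand the hosted-engine deploy also expects reverse lookups to work, so I can test the reverse record for gfs1 as well (same DNS server, address from the lookup above):

nslookup 10.10.45.11 192.168.100.1

and the same from ovirt1 itself, using whatever resolver the host is configured with:

nslookup 10.10.45.11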