I just did a rolling upgrade of my gluster storage cluster to the latest
3.5 bits. This all seems to have gone smoothly and all the volumes are
online. All volumes are replicated 1x2.
The ovirt console now insists that two of my volumes, including the
vm-store volume with my VMs happily running on it, have no bricks up.
It reports "Up but all bricks are down".
This would seem to be impossible. Gluster on the nodes themselves reports
no issues:
[root@gluster1 ~]# gluster volume status vm-store
Status of volume: vm-store
Gluster process                                   Port    Online  Pid
------------------------------------------------------------------------------
Brick gluster0:/export/brick0/vm-store            49158   Y       2675
Brick gluster1:/export/brick4/vm-store            49158   Y       2309
NFS Server on localhost                           2049    Y       27012
Self-heal Daemon on localhost                     N/A     Y       27019
NFS Server on gluster0                            2049    Y       12875
Self-heal Daemon on gluster0                      N/A     Y       12882

Task Status of Volume vm-store
------------------------------------------------------------------------------
There are no active volume tasks
As I mentioned, the VMs are running happily.

Initially the ISOs volume had the same issue. I did a volume stop and
start on it, as it was not being actively used, and that cleared up the
issue in the console. However, as I have VMs running, I can't do this for
the vm-store volume.
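For reference, the workaround on the ISOs volume was just a stop/start
cycle from the gluster CLI on one of the nodes, roughly as below (the
volume name here is from memory and may not match the exact name shown by
"gluster volume info"):

# stop the idle volume, then start it again; the console then picked up the brick state
gluster volume stop ISOs
gluster volume start ISOs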
Any suggestions?

Alastair