Hi all:
Sorry for yet another semi-related message to the list. In my attempts to troubleshoot and verify some suspicions about the nature of the performance problems I posted under "Major Performance Issues with gluster", I tried to move one of my problem VMs back to its original (SSD-backed) storage. It appeared to be moving fine, but last night it froze at 84%. This morning (8 hours later), it's still at 84%.
I need to get that VM back up and running, but I don't know how... it seems to be stuck in limbo.
The only other thing I explicitly did last night that may have caused an issue was finally setting up and activating georep to an offsite backup machine. That too seems to have gone a bit wonky. On the ovirt server side, it shows normal, with all volumes except data-hdd showing a last-synced time of 3am (which matches my bandwidth graphs for the WAN connections involved). data-hdd (the new disk-backed storage with most of my data on it) shows as not yet synced, but I'm also no longer seeing any bandwidth usage.
I logged into the georep destination box and found the system load a bit high, a bunch of gluster and rsync processes running, and both data and data-hdd using MORE disk space than the original (data-hdd using 4x more disk space than on the master node). I'm not sure what to do about this; I paused the replication from the cluster, but that doesn't seem to have had any effect on the georep destination.
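(A guess on the 4x disk usage, which I have not confirmed: if geo-rep's rsync isn't preserving sparseness, sparse VM images could be getting written out at their full apparent size on the slave. The apparent-vs-actual size difference is easy to see with a quick test on any Linux box:)

```shell
# Demonstration only: a sparse file has a huge apparent size but
# allocates almost no blocks on disk.
truncate -s 1G sparse.img

ls -lh sparse.img    # apparent size: 1.0G
du -h sparse.img     # actual allocation: close to zero
```

A copy made without sparse handling really occupies the whole 1 GiB of zeros on the destination, which would multiply the on-disk footprint exactly the way I'm seeing. I believe geo-rep has a config knob for passing rsync options (something like rsync's --sparse), but I haven't verified that and won't touch it without guidance.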
I promise I'll stop trying things until I get guidance from the list! Please do help; I need the VM HDD unstuck so I can start it.
Thanks!
--Jim