On Wed, Oct 2, 2013 at 12:07 AM, Jason Brooks wrote:
I'm having this issue on my oVirt 3.3 setup as well (two nodes, one of
them AIO, GlusterFS storage, both on F19).
Jason
I uploaded all of my logs from today to Bugzilla:
https://bugzilla.redhat.com/show_bug.cgi?id=1007980
I can reproduce the problem, and the migration phase is apparently what
generates files to heal in the Gluster volume.
The VM is running on f18ovn03. Before the migration:
[root@f18ovn01 vdsm]# gluster volume heal gvdata info
Gathering Heal info on volume gvdata has been successful
Brick 10.4.4.58:/gluster/DATA_GLUSTER/brick1
Number of entries: 0
Brick 10.4.4.59:/gluster/DATA_GLUSTER/brick1
Number of entries: 0
Start the migration, which fails with the "iface" error.
Now you see:
[root@f18ovn01 vdsm]# gluster volume heal gvdata info
Gathering Heal info on volume gvdata has been successful
Brick 10.4.4.58:/gluster/DATA_GLUSTER/brick1
Number of entries: 1
/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/dom_md/ids
Brick 10.4.4.59:/gluster/DATA_GLUSTER/brick1
Number of entries: 1
/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/dom_md/ids
While on f18ovn03 the volume still looks clean:
[root@f18ovn03 vdsm]# gluster volume heal gvdata info
Gathering Heal info on volume gvdata has been successful
Brick 10.4.4.58:/gluster/DATA_GLUSTER/brick1
Number of entries: 0
Brick 10.4.4.59:/gluster/DATA_GLUSTER/brick1
Number of entries: 0
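For scripting the before/after comparison, the pending-heal count can be pulled out of the "gluster volume heal <vol> info" output with awk. This is only a sketch: it assumes the 3.x output format shown above, where each brick section contains a "Number of entries: N" line, and it uses a here-string of the sample output in place of the live command.

```shell
# Sum the "Number of entries" lines from `gluster volume heal gvdata info`.
# Sample output captured above stands in for the live command here.
heal_info='Brick 10.4.4.58:/gluster/DATA_GLUSTER/brick1
Number of entries: 1
/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/dom_md/ids
Brick 10.4.4.59:/gluster/DATA_GLUSTER/brick1
Number of entries: 1
/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/dom_md/ids'

# Split on ": " so $2 is the count; sum across all bricks.
total=$(printf '%s\n' "$heal_info" \
  | awk -F': ' '/^Number of entries:/ {sum += $2} END {print sum+0}')
echo "total entries needing heal: $total"
```

On a live node you would pipe the real command instead, e.g. `gluster volume heal gvdata info | awk ...`, and alert when the total goes nonzero after a migration.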
Gianluca