On Tue, Oct 22, 2013 at 12:14 PM, Gianluca Cecchi wrote:
On Mon, Oct 21, 2013 at 11:33 AM, Vijay Bellur wrote:
> The following commands might help:
Thanks for the commands.
In the meantime I noticed that when I changed the gluster sources to allow
live migration, offsetting the ports to 50152+
as in
http://review.gluster.org/#/c/6075/
referenced in
https://bugzilla.redhat.com/show_bug.cgi?id=1018178
I had missed the iptables rules, which contained:
# Ports for gluster volume bricks (default 100 ports)
-A INPUT -p tcp -m tcp --dport 24009:24108 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 49152:49251 -j ACCEPT
So I had some gluster communication problems too.
For the moment, until libvirt for Fedora 19 gets patched as upstream, I
updated them to:
# Ports for gluster volume bricks (default 100 ports)
-A INPUT -p tcp -m tcp --dport 24009:24108 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 49152:49251 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 50152:50251 -j ACCEPT
(A quick test rebuilding libvirt-1.0.5.6-3.fc19.src.rpm with the proposed
patches gave many failures, and in the meantime I saw that some other
parts also need to be patched...)
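As a side note, to double-check which ports the bricks actually listen on
(and so whether the extra range is really needed), something like this
should do on each node, gvdata being my volume:
gluster volume status gvdata
ss -tlnp | grep glusterfsd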
Strangely, after fixing the iptables rules, the second node still has
problems with the VM image file where I ran this command:
[g.cecchi@c6s ~]$ sudo time dd if=/dev/zero bs=1024k count=3096 of=/testfile
3096+0 records in
3096+0 records out
3246391296 bytes (3.2 GB) copied, 42.3414 s, 76.7 MB/s
0.01user 7.99system 0:42.34elapsed 18%CPU (0avgtext+0avgdata 7360maxresident)k
0inputs+6352984outputs (0major+493minor)pagefaults 0swaps
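For what it's worth, that dd figure includes the page cache; rerunning it
with a final fdatasync, roughly like this, might give a more representative
number, though it doesn't change the underlying issue:
sudo dd if=/dev/zero bs=1024k count=3096 of=/testfile conv=fdatasync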
If I delete the image file that has the delta:
[root@f18ovn03 /]# find /gluster/DATA_GLUSTER/brick1/ -samefile
/gluster/DATA_GLUSTER/brick1/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/15f9ca1c-c435-4892-9eb7-0c84583b2a7d/a123801a-0a4d-4a47-a426-99d8480d2e49
-print -delete
/gluster/DATA_GLUSTER/brick1/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/15f9ca1c-c435-4892-9eb7-0c84583b2a7d/a123801a-0a4d-4a47-a426-99d8480d2e49
/gluster/DATA_GLUSTER/brick1/.glusterfs/f4/c1/f4c1b1a4-7328-4d6d-8be8-6b7ff8271d51
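(Before actually deleting, it can be useful to look at the AFR changelog
extended attributes on the file on both bricks, to see which copy gluster
considers stale; assuming the usual trusted.afr.<volname>-client-N naming,
something like:
getfattr -d -m . -e hex /gluster/DATA_GLUSTER/brick1/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/15f9ca1c-c435-4892-9eb7-0c84583b2a7d/a123801a-0a4d-4a47-a426-99d8480d2e49
)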
After the deletion, self-heal happens:
[root@f18ovn03 15f9ca1c-c435-4892-9eb7-0c84583b2a7d]# gluster volume heal gvdata info
Gathering Heal info on volume gvdata has been successful
Brick f18ovn01.mydomain:/gluster/DATA_GLUSTER/brick1
Number of entries: 2
/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/15f9ca1c-c435-4892-9eb7-0c84583b2a7d/a123801a-0a4d-4a47-a426-99d8480d2e49
/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/dom_md/ids
Brick f18ovn03.mydomain:/gluster/DATA_GLUSTER/brick1
Number of entries: 1
/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/dom_md/ids
[root@f18ovn03 15f9ca1c-c435-4892-9eb7-0c84583b2a7d]# gluster volume heal gvdata info
Gathering Heal info on volume gvdata has been successful
Brick f18ovn01.mydomain:/gluster/DATA_GLUSTER/brick1
Number of entries: 1
/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/15f9ca1c-c435-4892-9eb7-0c84583b2a7d/a123801a-0a4d-4a47-a426-99d8480d2e49
Brick f18ovn03.mydomain:/gluster/DATA_GLUSTER/brick1
Number of entries: 0
at the end:
[root@f18ovn03 15f9ca1c-c435-4892-9eb7-0c84583b2a7d]# gluster volume heal gvdata info
Gathering Heal info on volume gvdata has been successful
Brick f18ovn01.mydomain:/gluster/DATA_GLUSTER/brick1
Number of entries: 0
Brick f18ovn03.mydomain:/gluster/DATA_GLUSTER/brick1
Number of entries: 0
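To see whether anything was actually flagged as split-brain rather than
just pending heal, the related subcommand should also be available:
gluster volume heal gvdata info split-brain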
But the reported disk size differs between the two nodes:
[root@f18ovn03 15f9ca1c-c435-4892-9eb7-0c84583b2a7d]# qemu-img info a123801a-0a4d-4a47-a426-99d8480d2e49
image: a123801a-0a4d-4a47-a426-99d8480d2e49
file format: raw
virtual size: 10G (10737418240 bytes)
disk size: 1.4G
[root@f18ovn01 15f9ca1c-c435-4892-9eb7-0c84583b2a7d]# qemu-img info a123801a-0a4d-4a47-a426-99d8480d2e49
image: a123801a-0a4d-4a47-a426-99d8480d2e49
file format: raw
virtual size: 10G (10737418240 bytes)
disk size: 4.2G
Is there any problem here?
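To check whether this is just a difference in sparse allocation rather
than in content, something like this on each brick (with the VM shut down)
might help:
du -h --apparent-size a123801a-0a4d-4a47-a426-99d8480d2e49
du -h a123801a-0a4d-4a47-a426-99d8480d2e49
md5sum a123801a-0a4d-4a47-a426-99d8480d2e49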
Gianluca