On Thu, Oct 17, 2013 at 5:45 PM, Gianluca Cecchi wrote:
Hello,
One engine and two hosts, all with updated f19 (despite what their names suggest) and the ovirt updates-testing repo enabled.
So I have ovirt 3.3.0.1-1 and vdsm-4.12.1-4.
The kernel is 3.11.2-201.fc19.x86_64 (I had problems booting with the latest 3.11.4-201.fc19.x86_64).
The storage domain is configured with gluster as shipped in f19 (3.4.1-1.fc19.x86_64, recompiled to bind to ports 50152+) and distributed replicated bricks.
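As a sanity check that the rebuild took effect (the volume name gvdata below is just an example), the actual brick ports can be confirmed with:

gluster volume status gvdata
ss -tlnp | grep glusterfsd

and should fall in the 50152+ range.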
I perform this sequence of operations (a scripted sketch follows the list):
- power off all VMs (to start clean)
- put both hosts in maintenance
- shutdown both hosts
- startup one host
- activate one host in webadmin gui
after a delay of about 2-3 minutes it comes up, with its own gluster copy active
- power on a VM and write 3 GB on it
[snip]
- power on second host
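As a minimal sketch, the maintenance/activate steps and the 3 GB write can also be scripted; the engine URL, credentials and host id below are placeholders, not the real ones:

# put a host into maintenance via the engine REST API
curl -k -u admin@internal:PASSWORD -H "Content-Type: application/xml" \
  -d "<action/>" https://engine.example.com/api/hosts/HOST_UUID/deactivate

# activate it again later
curl -k -u admin@internal:PASSWORD -H "Content-Type: application/xml" \
  -d "<action/>" https://engine.example.com/api/hosts/HOST_UUID/activate

# inside the guest: write ~3 GB to exercise the gluster-backed disk
dd if=/dev/zero of=/root/testfile bs=1M count=3072 oflag=direct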
I had overlooked that the iptables rules only contained:
# Ports for gluster volume bricks (default 100 ports)
-A INPUT -p tcp -m tcp --dport 24009:24108 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 49152:49251 -j ACCEPT
So I had some gluster communication problems too.
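A quick way to see this from the peer host (host1 below stands for the other node's name) is to probe a brick port directly:

nc -zv host1 50152

which fails while the rule is missing and succeeds once it is in place.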
For the moment, until libvirt for Fedora 19 gets patched as upstream (its default live-migration port range starts at 49152, which is exactly where gluster's new default brick ports begin, and why I rebuilt gluster to use 50152+), I updated it to:
# Ports for gluster volume bricks (default 100 ports)
-A INPUT -p tcp -m tcp --dport 24009:24108 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 49152:49251 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 50152:50251 -j ACCEPT
so that I can test both gluster and live migration.
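To make these rules survive a reboot, assuming the hosts use the plain iptables service that oVirt host deploy sets up (rather than firewalld), I can append the same lines to /etc/sysconfig/iptables and then reload and verify:

systemctl restart iptables.service
iptables -nL INPUT | grep 50152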
I'm going to retest completely and see how it behaves.
Gianluca