<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=ISO-8859-1">
</head>
<body text="#000000" bgcolor="#FFFFFF">
Itamar, I am addressing this to you because one of your assignments
seems to be to coordinate other oVirt contributors when dealing with
issues that are raised on the ovirt-users email list.<br>
<br>
As you are aware, there is an ongoing split-brain problem with
running sanlock on replicated gluster storage. Personally, I
believe that this is the 5th time that I have been bitten by this
sanlock+gluster problem.<br>
<br>
I believe that the following are true (if not, my entire request is
probably off base).<br>
<ul>
<li>ovirt uses sanlock in such a way that when the sanlock storage
is on a replicated gluster file system, very small storage
disruptions can result in a gluster split-brain on the sanlock
space</li>
<ul>
<li>the Gluster developers are aware of the problem, and are
working on a different way of replicating data, which should
reduce these problems.</li>
</ul>
<li>most (maybe all) of the sanlock locks have a short duration,
measured in seconds</li>
<li>there are only a couple of things that a user can safely do
from the command line when a file is in split-brain (a sketch
follows this list)</li>
<ul>
<li>delete the file</li>
<li>rename (mv) the file<br>
</li>
</ul>
</ul>
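<p>For illustration, the only split-brain handling I know to be safe is
done from the client mount, roughly as below. This is a sketch using my
volume name, with UUID standing in for the storage-domain and image
UUIDs, not a verified procedure:</p>
<pre># list the files the volume considers split-brain
gluster volume heal engVM1 info split-brain

# reads fail with I/O errors, but rename and delete still work on the mount
mv /rhev/data-center/mnt/localhost:_engVM1/UUID/dom_md/ids \
   /rhev/data-center/mnt/localhost:_engVM1/UUID/dom_md/ids.bad
# or: rm /rhev/data-center/mnt/localhost:_engVM1/UUID/dom_md/ids</pre>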
<u>How did I get into this mess?</u><br>
<br>
had 3 hosts running ovirt 3.3<br>
each hosted VMs<br>
gluster replica 3 storage<br>
engine was external to cluster<br>
upgraded 3 hosts from ovirt 3.3 to 3.4<br>
hosted-engine deploy<br>
used new gluster volume (accessed via nfs) for storage<br>
storage was accessed using localhost:engVM1 link (localhost
was probably a poor choice)<br>
created new engine on VM (did not transfer any data from old
engine)<br>
added 3 hosts to new engine via web-gui<br>
ran above setup for 3 days<br>
shut entire system down before I left on vacation (holiday)<br>
came back from vacation<br>
powered on hosts<br>
found that iptables did not have rules for gluster access<br>
(a continuing problem when host installation is allowed to set up
the firewall)<br>
added rules for gluster<br>
glusterfs now up and running<br>
added storage manually<br>
tried "hosted-engine --vm-start"<br>
vm did not start<br>
logs show sanlock errors<br>
"gluster volume heal engVM1full:<br>
"gluster volume heal engVM1 info split-brain" showed 6 files in
split-brain<br>
all 5 prefixed by /rhev/data-center/mnt/localhost\:_engVM1<br>
UUID/dom_md/ids<br>
UUID/images/UUID/UUID (VM hard disk)<br>
UUID/images/UUID/UUID.lease<br>
UUID/ha_agent/hosted-engine.lockspace<br>
UUID/ha_agent/hosted-engine.metadata<br>
I copied each of the above files off of each of the three bricks to
a safe place (15 files copied)<br>
I renamed the 5 files on /rhev/....<br>
I copied the 5 files from one of the bricks to /rhev/<br>
files can now be read OK (e.g. cat ids)<br>
sanlock.log shows error sets like these:<br>
<pre>2014-05-20 03:23:39-0400 36199 [2843]: s3358 lockspace 5ebb3b40-a394-405b-bbac-4c0e21ccd659:1:/rhev/data-center/mnt/localhost:_engVM1/5ebb3b40-a394-405b-bbac-4c0e21ccd659/dom_md/ids:0
2014-05-20 03:23:39-0400 36199 [18873]: open error -5 /rhev/data-center/mnt/localhost:_engVM1/5ebb3b40-a394-405b-bbac-4c0e21ccd659/dom_md/ids
2014-05-20 03:23:39-0400 36199 [18873]: s3358 open_disk /rhev/data-center/mnt/localhost:_engVM1/5ebb3b40-a394-405b-bbac-4c0e21ccd659/dom_md/ids error -5
2014-05-20 03:23:40-0400 36200 [2843]: s3358 add_lockspace fail result -19</pre>
I am now stuck<br>
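<p>For the record, the only manual escape hatch I have seen suggested
is to rebuild the ids file with sanlock itself, roughly as below. The
lockspace name is the storage domain UUID (as the sanlock.log lines
above show); this is a sketch I have not verified, and it presumably
must be run with ovirt-ha-agent, ovirt-ha-broker and vdsmd stopped:</p>
<pre># re-initialize the lockspace on the ids file (host_id 0 means init)
# colons inside the path are escaped for sanlock's name:id:path:offset syntax
sanlock direct init -s 5ebb3b40-a394-405b-bbac-4c0e21ccd659:0:/rhev/data-center/mnt/localhost\:_engVM1/5ebb3b40-a394-405b-bbac-4c0e21ccd659/dom_md/ids:0</pre>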
<br>
What I would like to see in oVirt to help me (and others like me).
Alternatives are listed in order from most desirable (fully automatic)
to least desirable (a set of commands to type, with many variables to
figure out).<br>
<br>
1. automagic recovery (a rough sketch of the "problem" file follows
this list)<br>
<ul>
<li> When a host is not able to access the sanlock storage, it
writes a small "problem" text file into the shared storage</li>
<ul>
<li>the host-ID as part of the name (so only one host ever
accesses that file)</li>
<li>a status number for the error causing problems</li>
<li>time stamp</li>
<li>time stamp when last sanlock lease will expire</li>
<li>if sanlock regains access to the storage, the "problem" file
is deleted</li>
</ul>
<li>once enough time has passed for its last sanlock lease to
expire, the highest-numbered host does a survey</li>
<ul>
<li>did all other hosts create "problem" files?</li>
<li>do all "problem" files show same (or compatible) error codes
related to file access problems?</li>
<li>are all hosts communicating by network?</li>
<li>if yes to all of the above, carry out the steps below</li>
</ul>
<li>delete all sanlock storage space<br>
</li>
<li>initialize sanlock from scratch</li>
<li>restart whatever may have given up because of sanlock</li>
<li>restart VM if necessary</li>
</ul>
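<p>To make the idea concrete, the "problem" file could be as simple as
the sketch below; every file name and field here is hypothetical
(nothing like this exists in oVirt today):</p>
<pre># hypothetical marker written by a host that cannot reach its sanlock space
HOST_ID=2
NOW=$(date -u +%s)
# status: errno from the failed sanlock I/O (-5 = EIO here)
# lease_expires: sanlock leases time out after roughly 80s with default timeouts
printf 'status=-5\ndetected=%s\nlease_expires=%s\n' "$NOW" "$((NOW + 80))" \
  > /rhev/data-center/mnt/localhost:_engVM1/problem-host-${HOST_ID}
# the marker is deleted again as soon as sanlock access succeeds</pre>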
<p>2. recovery subcommand<br>
</p>
<ul>
<li>add "hosted-engine --lock-initialize" command that would
delete sanlock, start over from scratch</li>
</ul>
<p>3. script<br>
</p>
<ul>
<li>publish a script (in the oVirt packages or available on the
web) which, when run, does all (or most) of the recovery process
needed.</li>
</ul>
<p>4. commands<br>
</p>
<ul>
<li>publish on the web a "recipe" for dealing with the files that
commonly go split-brain (a sketch of such a recipe follows this
list)</li>
<ul>
<li>ids</li>
<li>*.lease</li>
<li>*.lockspace</li>
</ul>
</ul>
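<p>As a starting point, the recipe I have pieced together from gluster
list postings looks roughly like the sketch below. The brick paths and
gfid are hypothetical, and I offer this only as an outline of what a
published recipe might contain, not as verified instructions:</p>
<pre># 1. pick the good copy: compare the replicas directly on each brick
#    (file contents, plus the trusted.afr.* change-log xattrs)
getfattr -d -m . -e hex /bricks/engVM1/UUID/dom_md/ids

# 2. on each brick holding a bad copy, remove the file AND its gfid
#    hard link under .glusterfs (path built from the trusted.gfid
#    xattr: first two hex byte-pairs, then the full gfid)
rm /bricks/engVM1/UUID/dom_md/ids
rm /bricks/engVM1/.glusterfs/AB/CD/ABCD...        # hypothetical gfid

# 3. trigger self-heal from a client mount
stat /rhev/data-center/mnt/localhost:_engVM1/UUID/dom_md/ids
gluster volume heal engVM1</pre>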
<p>Any chance of help at any of the above levels?<br>
</p>
<p>Ted Miller<br>
Elkhart, IN, USA<br>
<br>
</p>
</body>
</html>