On 10/12/2015 06:43 PM, Nico wrote:
On 2015-10-12 14:04, Nir Soffer wrote:
>
> Yes, the engine will let you use such a volume in 3.5 - this is a bug.
> In 3.6 you will not be able to use such a setup.
>
> replica 2 fails in a very bad way when one brick is down; the
> application may get stale data, and this breaks sanlock. You will get
> stuck with an SPM that cannot be stopped, and other fun stuff.
>
> You don't want to go in this direction, and we will not be able to
> support that.
>
>> here are the last entries of vdsm.log
>
> We need the whole file.
>
> I suggest you file an oVirt bug and attach the full vdsm log file
> showing the timeframe of the error - probably from the time you
> created the glusterfs domain.
>
> Nir
>
Please find the full logs here:
https://94.23.2.63/log_vdsm/vdsm.log
https://94.23.2.63/log_vdsm/
https://94.23.2.63/log_engine/
The engine log entries looping with "Volume contains apparently corrupt
bricks" appear when the engine tries to get information about the
volumes from the gluster CLI and update its database. These errors do
not affect the functioning of the storage domain or the running virtual
machines, but they do affect the monitoring/management of the gluster
volume from oVirt.
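
(For reference, the data the engine tries to sync can be queried
directly on any gluster server node with the standard gluster CLI - a
minimal sketch, with <VOLNAME> as a placeholder for your volume name; I
believe vdsm consumes the XML form of this output:

  # plain and XML views of the volume information the engine syncs
  gluster volume info <VOLNAME>
  gluster volume status <VOLNAME> detail
  gluster volume info <VOLNAME> --xml

Comparing that output with what oVirt shows for the volume can help
confirm whether the bricks really look "corrupt" to gluster itself.)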
Now, to identify the cause of the error: the logs indicate that the
gluster server UUID has either not been updated or is different in the
engine. It could be one of these scenarios (a way to compare the UUIDs
is sketched after the list):
1. Did you create the cluster with only the virt service enabled and
later enable the gluster service? In this case, the gluster server UUID
may not be updated. You will need to put the host into maintenance and
then activate it to resolve this.
2. Did you re-install the gluster server nodes after adding them to
oVirt? If this is the case, we need to investigate further how the
mismatch occurred.
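
(A rough way to compare the two sides - the gluster commands and file
below are standard, but the engine table/column names are an assumption
on my part, so adjust them to your schema:

  # on the gluster server node: the UUID glusterd identifies itself with
  cat /var/lib/glusterd/glusterd.info
  gluster system:: uuid get

  # on the engine host: what the engine has recorded for that server
  # (table/column names assumed)
  sudo -u postgres psql engine \
    -c "select server_id, gluster_server_uuid from gluster_server;"

If the values differ, the maintenance/activate cycle from scenario 1
should let the engine refresh the stored UUID.)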