Can you share engine.log and vdsm.log?
This mostly happens for one of the following reasons:
- "gluster volume status vm-store" is not consistently returning the right output
- ovirt-engine is not able to identify the bricks properly
Anyway, engine.log will give better clarity.
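For instance, something along these lines (a rough sketch; the log
paths assume a default oVirt/vdsm layout):

  # run status a few times and compare runs for inconsistencies
  for i in 1 2 3; do gluster volume status vm-store; sleep 2; done

  # vdsm parses the XML output, so it is worth checking that form too
  gluster volume status vm-store --xml

  # on each gluster node, see what vdsm reported about the bricks
  grep -i brick /var/log/vdsm/vdsm.log | tail -n 50

  # on the engine host, look for gluster sync errors
  grep -i gluster /var/log/ovirt-engine/engine.log | tail -n 50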
On 05/22/2014 02:24 AM, Alastair Neil wrote:
I just did a rolling upgrade of my gluster storage cluster to the
latest 3.5 bits. This all seems to have gone smoothly, and all the
volumes are online. All volumes are replicated 1x2.
The oVirt console now insists that two of my volumes, including the
vm-store volume with my VMs happily running on it, have no bricks up.
It reports "Up but all bricks are down".
This would seem to be impossible. Gluster itself on the nodes reports
no issues:
[root@gluster1 ~]# gluster volume status vm-store
Status of volume: vm-store
Gluster process                             Port    Online  Pid
------------------------------------------------------------------------------
Brick gluster0:/export/brick0/vm-store      49158   Y       2675
Brick gluster1:/export/brick4/vm-store      49158   Y       2309
NFS Server on localhost                     2049    Y       27012
Self-heal Daemon on localhost               N/A     Y       27019
NFS Server on gluster0                      2049    Y       12875
Self-heal Daemon on gluster0                N/A     Y       12882
Task Status of Volume vm-store
------------------------------------------------------------------------------
There are no active volume tasks
As I mentioned, the VMs are running happily.
Initially the ISOs volume had the same issue. I did a volume stop and
start on it, as it was not being actively used, and that cleared up
the issue in the console. However, as I have VMs running, I can't do
this for the vm-store volume.
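(By a stop and start I mean just the usual restart, assuming the
volume is literally named ISOs:

  gluster volume stop ISOs
  gluster volume start ISOs

which takes the volume offline briefly, so it is not an option while
VMs are using vm-store.)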
Any suggestions?

Alastair