post glusterfs 3.4 -> 3.5 upgrade issue in ovirt (3.4.0-1.fc19): bricks unavailable

I just did a rolling upgrade of my gluster storage cluster to the latest 3.5 bits. This all seems to have gone smoothly and all the volumes are online. All volumes are replicated 1x2.

The oVirt console now insists that two of my volumes, including the vm-store volume with my VMs happily running, have no bricks up. It reports "Up but all bricks are down". This would seem to be impossible; Gluster on the nodes itself reports no issues:

[root@gluster1 ~]# gluster volume status vm-store
Status of volume: vm-store
Gluster process                              Port    Online  Pid
------------------------------------------------------------------------------
Brick gluster0:/export/brick0/vm-store       49158   Y       2675
Brick gluster1:/export/brick4/vm-store       49158   Y       2309
NFS Server on localhost                      2049    Y       27012
Self-heal Daemon on localhost                N/A     Y       27019
NFS Server on gluster0                       2049    Y       12875
Self-heal Daemon on gluster0                 N/A     Y       12882

Task Status of Volume vm-store
------------------------------------------------------------------------------
There are no active volume tasks
As I mentioned, the VMs are running happily. Initially the ISOs volume had the same issue; I did a volume start and stop on it, as it was not being actively used, and that cleared up the issue in the console. However, as I have VMs running, I can't do this for the vm-store volume.

Any suggestions?

Alastair
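As a rough sketch of the workaround described above (run on one of the gluster nodes; stopping a volume takes it offline for any client, which is why this only works for an idle volume such as ISOs):

# stop the idle volume (gluster asks for confirmation and the volume is offline while stopped)
gluster volume stop ISOs
# start it again; in the reporter's case the console status cleared up after this
gluster volume start ISOs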

engine.log and vdsm.log?

This can mostly happen due to one of the following reasons:
- "gluster volume status vm-store" is not consistently returning the right output
- ovirt-engine is not able to identify the bricks properly

Anyway, engine.log will give better clarity.

Hi, thanks for the reply. Here is an extract from a grep I ran on the vdsm log for the volume name vm-store; it seems to indicate the bricks are ONLINE. I am uncertain how to extract meaningful information from the engine.log; can you provide some guidance?

Thanks,

Alastair
Thread-100::DEBUG::2014-05-27 15:01:06,335::BindingXMLRPC::1067::vds::(wrapper) client [129.174.94.239]::call volumeStatus with ('vm-store', '', '') {}
Thread-100::DEBUG::2014-05-27 15:01:06,356::BindingXMLRPC::1074::vds::(wrapper) return volumeStatus with {'volumeStatus': {'bricks': [{'status': 'ONLINE', 'brick': 'gluster0:/export/brick0', 'pid': '2675', 'port': '49158', 'hostuuid': 'bcff5245-ea86-4384-a1bf-9219c8be8001'}, {'status': 'ONLINE', 'brick': 'gluster1:/export/brick4/vm-store', 'pid': '2309', 'port': '49158', 'hostuuid': '54d39ae4-91ae-410b-828c-67031f3d8a68'}], 'nfs': [{'status': 'ONLINE', 'hostname': '129.174.126.56', 'pid': '27012', 'port': '2049', 'hostuuid': '54d39ae4-91ae-410b-828c-67031f3d8a68'}, {'status': 'ONLINE', 'hostname': 'gluster0', 'pid': '12875', 'port': '2049', 'hostuuid': 'bcff5245-ea86-4384-a1bf-9219c8be8001'}], 'shd': [{'status': 'ONLINE', 'hostname': '129.174.126.56', 'pid': '27019', 'hostuuid': '54d39ae4-91ae-410b-828c-67031f3d8a68'}, {'status': 'ONLINE', 'hostname': 'gluster0', 'pid': '12882', 'hostuuid': 'bcff5245-ea86-4384-a1bf-9219c8be8001'}], 'name': 'vm-store'}, 'status': {'message': 'Done', 'code': 0}}
Thread-16::DEBUG::2014-05-27 15:01:15,339::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:01:25,381::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:01:35,423::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:01:45,465::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:01:55,507::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:02:05,549::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:02:15,590::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:02:25,657::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:02:35,698::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:02:45,740::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:02:55,784::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:03:05,827::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:03:15,869::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:03:25,910::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:03:35,953::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:03:45,996::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:03:56,037::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:04:06,078::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:04:16,107::fileSD::140::Storage.StorageDomain::(__init__) Reading domain in path /rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8
Thread-16::DEBUG::2014-05-27 15:04:16,126::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=Gluster-VM-Store', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'MASTER_VERSION=1', 'POOL_DESCRIPTION=VS-VM', 'POOL_DOMAINS=6d637c7f-a4ab-4510-a0d9-63a04c55d6d8:Active,6d1e2f10-e6ec-42ce-93d5-ee93e8eeeb10:Active', 'POOL_SPM_ID=3', 'POOL_SPM_LVER=7', 'POOL_UUID=9a0b5f4a-4a0f-432c-b70c-53fd5643cbb7', 'REMOTE_PATH=gluster0:vm-store', 'ROLE=Master', 'SDUUID=6d637c7f-a4ab-4510-a0d9-63a04c55d6d8', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=8e747f0ebf360f1db6801210c574405dd71fe731']
Thread-16::DEBUG::2014-05-27 15:04:16,153::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:04:26,196::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:04:36,238::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
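On the question above about mining engine.log: assuming the default log locations (/var/log/ovirt-engine/engine.log on the engine host, /var/log/vdsm/vdsm.log on each node), a rough first pass is to grep for the gluster sync job and the volume name, for example:

# on the engine host: how the engine interpreted the brick/volume status
grep -iE 'glustersyncjob|vm-store' /var/log/ovirt-engine/engine.log | grep -iE 'error|warn|down' | tail -n 50
# on a gluster node: what vdsm actually returned for the volume
grep -i volumestatus /var/log/vdsm/vdsm.log | tail -n 20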
On 21 May 2014 23:51, Kanagaraj <kmayilsa@redhat.com> wrote:

Hi Alastair,

This could be a mismatch in the hostname identified in ovirt and gluster. You could check for any exceptions from GlusterSyncJob in engine.log.

Also, what version of ovirt are you using? And the compatibility version of your cluster?
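A rough way to check for the hostname mismatch suggested here is to compare the peer names gluster itself knows about with the host names/addresses registered in the oVirt cluster (the commands below are generic gluster CLI, not something taken from this thread):

# on any gluster node: the names/UUIDs gluster uses for its peers
gluster peer status
gluster pool list
# then compare these against the host addresses shown under the Hosts tab in the oVirt web admin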

The oVirt version is 3.4. I did have a slightly older version of vdsm on gluster0, but I have updated it and the issue persists. The compatibility version on the storage cluster is 3.3.

I checked the logs for GlusterSyncJob notifications and there are none.

On 28 May 2014 10:19, Sahina Bose <sabose@redhat.com> wrote:

I just noticed this in the console and I don't know if it is relevant: when I look at the "General" tab on the hosts, "GlusterFS Version" shows "N/A".
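A basic sanity check at this point (an editorial suggestion, not something tried in the thread) would be to confirm which gluster and vdsm packages each node is actually running:

# on each gluster node
gluster --version | head -n 1
rpm -qa | grep -E 'glusterfs|vdsm' | sort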
ovirt version is 3.4. I did have a slightly older version of vdsm on gluster0 but I have updated it and the issue persists. The compatibility version on the storage cluster is 3.3.
I checked the logs for GlusterSyncJob notifications and there are none.
On 28 May 2014 10:19, Sahina Bose <sabose@redhat.com> wrote:
Hi Alastair,
This could be a mismatch in the hostname identified in ovirt and gluster.
You could check for any exceptions from GlusterSyncJob in engine.log.
Also, what version of ovirt are you using. And the compatibility version of your cluster?
On 05/28/2014 12:40 AM, Alastair Neil wrote:
Hi thanks for the reply. Here is an extract from a grep I ran on the vdsm log grepping for the volume name vm-store. It seems to indicate the bricks are ONLINE.
I am uncertain how to extract meaningful information from the engine.log can you provide some guidance?
Thanks,
Alastair
Thread-100::DEBUG::2014-05-27 15:01:06,335::BindingXMLRPC::1067::vds::(wrapper) client [129.174.94.239]::call volumeStatus with ('vm-store', '', '') {}
Thread-100::DEBUG::2014-05-27 15:01:06,356::BindingXMLRPC::1074::vds::(wrapper) return volumeStatus with {'volumeStatus': {'bricks': [{'status': 'ONLINE', 'brick': 'gluster0:/export/brick0', 'pid': '2675', 'port': '49158', 'hostuuid': 'bcff5245-ea86-4384-a1bf-9219c8be8001'}, {'status': 'ONLINE', 'brick': 'gluster1:/export/brick4/vm-store', 'pid': '2309', 'port': '49158', 'hostuuid': '54d39ae4-91ae-410b-828c-67031f3d8a68'}], 'nfs': [{'status': 'ONLINE', 'hostname': '129.174.126.56', 'pid': '27012', 'port': '2049', 'hostuuid': '54d39ae4-91ae-410b-828c-67031f3d8a68'}, {'status': 'ONLINE', 'hostname': 'gluster0', 'pid': '12875', 'port': '2049', 'hostuuid': 'bcff5245-ea86-4384-a1bf-9219c8be8001'}], 'shd': [{'status': 'ONLINE', 'hostname': '129.174.126.56', 'pid': '27019', 'hostuuid': '54d39ae4-91ae-410b-828c-67031f3d8a68'}, {'status': 'ONLINE', 'hostname': 'gluster0', 'pid': '12882', 'hostuuid': 'bcff5245-ea86-4384-a1bf-9219c8be8001'}], 'name': 'vm-store'}, 'status': {'message': 'Done', 'code': 0}}
Thread-16::DEBUG::2014-05-27 15:01:15,339::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:01:25,381::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:01:35,423::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:01:45,465::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:01:55,507::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:02:05,549::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:02:15,590::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:02:25,657::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:02:35,698::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:02:45,740::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:02:55,784::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:03:05,827::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:03:15,869::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:03:25,910::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:03:35,953::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:03:45,996::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:03:56,037::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:04:06,078::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:04:16,107::fileSD::140::Storage.StorageDomain::(__init__) Reading domain in path /rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8
Thread-16::DEBUG::2014-05-27 15:04:16,126::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=Gluster-VM-Store', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'MASTER_VERSION=1', 'POOL_DESCRIPTION=VS-VM', 'POOL_DOMAINS=6d637c7f-a4ab-4510-a0d9-63a04c55d6d8:Active,6d1e2f10-e6ec-42ce-93d5-ee93e8eeeb10:Active', 'POOL_SPM_ID=3', 'POOL_SPM_LVER=7', 'POOL_UUID=9a0b5f4a-4a0f-432c-b70c-53fd5643cbb7', 'REMOTE_PATH=gluster0:vm-store', 'ROLE=Master', 'SDUUID=6d637c7f-a4ab-4510-a0d9-63a04c55d6d8', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=8e747f0ebf360f1db6801210c574405dd71fe731']
Thread-16::DEBUG::2014-05-27 15:04:16,153::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:04:26,196::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27 15:04:36,238::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata bs=4096 count=1' (cwd None)
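For anyone who wants to pull a similar extract, a grep along these lines should reproduce it (the path assumes a default vdsm install, where the log lives at /var/log/vdsm/vdsm.log):

  # show the most recent vdsm activity mentioning the volume
  grep 'vm-store' /var/log/vdsm/vdsm.log | tail -n 200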
On 21 May 2014 23:51, Kanagaraj <kmayilsa@redhat.com> wrote:
engine.log and vdsm.log?
This can mostly happen due to the following reasons:
- "gluster volume status vm-store" is not consistently returning the right output
- ovirt-engine is not able to identify the bricks properly
Anyway, engine.log will give better clarity.
On 05/22/2014 02:24 AM, Alastair Neil wrote:
I just did a rolling upgrade of my gluster storage cluster to the latest 3.5 bits. This all seems to have gone smoothly and all the volumes are on line. All volumes are replicated 1x2
The ovirt console now insists that two of my volumes, including the vm-store volume with my VMs happily running, have no bricks up.
It reports "Up but all bricks are down"
This would seem to be impossible. Gluster on the nodes itself reports no issues
[root@gluster1 ~]# gluster volume status vm-store
Status of volume: vm-store
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick gluster0:/export/brick0/vm-store                  49158   Y       2675
Brick gluster1:/export/brick4/vm-store                  49158   Y       2309
NFS Server on localhost                                 2049    Y       27012
Self-heal Daemon on localhost                           N/A     Y       27019
NFS Server on gluster0                                  2049    Y       12875
Self-heal Daemon on gluster0                            N/A     Y       12882

Task Status of Volume vm-store
------------------------------------------------------------------------------
There are no active volume tasks
As I mentioned, the VMs are running happily. Initially the ISOs volume had the same issue. I did a volume start and stop on the volume as it was not being actively used, and that cleared up the issue in the console. However, as I have VMs running I can't do this for the vm-store volume.
Any suggestions, Alastair

On 05/28/2014 08:36 PM, Alastair Neil wrote:
I just noticed this in the console and I don't know if it is relevant.
When I look at the "General" tab on the hosts under "GlusterFS Version" it shows "N/A".
That's not related. The GlusterFS version in the UI is populated from the getVdsCaps output from vdsm - looks like the vdsm running on your gluster node is not returning that? Could you share the engine.log so that we can look at how the gluster status was interpreted and updated? The log from the last 10 mins should do. Thanks!
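On the getVdsCaps point, a quick way to see what vdsm is actually returning on the gluster node is something like the following (vdsClient syntax assumed from oVirt 3.4-era vdsm; drop -s if the host is not set up for SSL):

  # dump the host capabilities vdsm reports to the engine and look for gluster entries
  vdsClient -s 0 getVdsCaps | grep -i gluster

If nothing gluster-related shows up there, that would be consistent with the "N/A" shown in the UI.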
On 28 May 2014 11:03, Alastair Neil <ajneil.tech@gmail.com> wrote:
ovirt version is 3.4. I did have a slightly older version of vdsm on gluster0 but I have updated it and the issue persists. The compatibility version on the storage cluster is 3.3.
I checked the logs for GlusterSyncJob notifications and there are none.
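A grep along these lines should surface any GlusterSyncJob errors, if they exist (engine log path assumed from a default ovirt-engine install):

  grep -i 'GlusterSyncJob' /var/log/ovirt-engine/engine.log | grep -iE 'error|exception'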
On 28 May 2014 10:19, Sahina Bose <sabose@redhat.com> wrote:
Hi Alastair,
This could be a mismatch in the hostname identified in ovirt and gluster.
You could check for any exceptions from GlusterSyncJob in engine.log.
Also, what version of ovirt are you using, and what is the compatibility version of your cluster?
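One quick way to check for such a hostname mismatch is to compare the names gluster itself uses with the names the hosts were added under in ovirt, for example:

  gluster peer status            # peer hostnames/IPs as gluster sees them
  gluster volume info vm-store   # brick definitions, including the host part

If the bricks were defined with an address or short name that differs from the host name registered in the engine, the engine may fail to map the brick to a known host and report it as down.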
On 05/28/2014 12:40 AM, Alastair Neil wrote:
Hi, thanks for the reply. Here is an extract from a grep I ran on the vdsm log for the volume name vm-store. It seems to indicate the bricks are ONLINE.
I am uncertain how to extract meaningful information from the engine.log; can you provide some guidance?
Thanks,
Alastair
On 21 May 2014 23:51, Kanagaraj <kmayilsa@redhat.com> wrote:
engine.log and vdsm.log?
This can mostly happen due to the following reasons:
- "gluster volume status vm-store" is not consistently returning the right output
- ovirt-engine is not able to identify the bricks properly
Anyway, engine.log will give better clarity.
On 05/22/2014 02:24 AM, Alastair Neil wrote:
I just did a rolling upgrade of my gluster storage cluster to the latest 3.5 bits. This all seems to have gone smoothly and all the volumes are on line. All volumes are replicated 1x2
The ovirt console now insists that two of my volumes, including the vm-store volume with my VMs happily running, have no bricks up.
It reports "Up but all bricks are down"
This would seem to be impossible. Gluster on the nodes itself reports no issues
[root@gluster1 ~]# gluster volume status vm-store
Status of volume: vm-store
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick gluster0:/export/brick0/vm-store                  49158   Y       2675
Brick gluster1:/export/brick4/vm-store                  49158   Y       2309
NFS Server on localhost                                 2049    Y       27012
Self-heal Daemon on localhost                           N/A     Y       27019
NFS Server on gluster0                                  2049    Y       12875
Self-heal Daemon on gluster0                            N/A     Y       12882

Task Status of Volume vm-store
------------------------------------------------------------------------------
There are no active volume tasks
As I mentioned, the VMs are running happily. Initially the ISOs volume had the same issue. I did a volume start and stop on the volume as it was not being actively used, and that cleared up the issue in the console. However, as I have VMs running I can't do this for the vm-store volume.
Any suggestions, Alastair

Ping - here is the engine log; copying the list this time too.

On 28 May 2014 12:17, Sahina Bose <sabose@redhat.com> wrote:
On 05/28/2014 08:36 PM, Alastair Neil wrote:
I just noticed this in the console and I don't know if it is relevant.
When I look at the "General" tab on the hosts under "GlusterFS Version" it shows "N/A".
That's not related. The GlusterFS version in the UI is populated from the getVdsCaps output from vdsm - looks like the vdsm running on your gluster node is not returning that?
Could you share the engine.log so that we can look at how the gluster status was interpreted and updated? The log from the last 10 mins should do.
thanks!
On 28 May 2014 11:03, Alastair Neil <ajneil.tech@gmail.com> wrote:
ovirt version is 3.4. I did have a slightly older version of vdsm on gluster0 but I have updated it and the issue persists. The compatibility version on the storage cluster is 3.3.
I checked the logs for GlusterSyncJob notifications and there are none.
On 28 May 2014 10:19, Sahina Bose <sabose@redhat.com> wrote:
Hi Alastair,
This could be a mismatch in the hostname identified in ovirt and gluster.
You could check for any exceptions from GlusterSyncJob in engine.log.
Also, what version of ovirt are you using, and what is the compatibility version of your cluster?
On 05/28/2014 12:40 AM, Alastair Neil wrote:
Hi, thanks for the reply. Here is an extract from a grep I ran on the vdsm log for the volume name vm-store. It seems to indicate the bricks are ONLINE.
I am uncertain how to extract meaningful information from the engine.log; can you provide some guidance?
Thanks,
Alastair
On 21 May 2014 23:51, Kanagaraj <kmayilsa@redhat.com> wrote:
engine.log and vdsm.log?
This can mostly happen due to the following reasons:
- "gluster volume status vm-store" is not consistently returning the right output
- ovirt-engine is not able to identify the bricks properly
Anyway, engine.log will give better clarity.
On 05/22/2014 02:24 AM, Alastair Neil wrote:
I just did a rolling upgrade of my gluster storage cluster to the latest 3.5 bits. This all seems to have gone smoothly and all the volumes are on line. All volumes are replicated 1x2
The ovirt console now insists that two of my volumes, including the vm-store volume with my VMs happily running, have no bricks up.
It reports "Up but all bricks are down"
This would seem to be impossible. Gluster on the nodes itself reports no issues
[root@gluster1 ~]# gluster volume status vm-store
Status of volume: vm-store
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick gluster0:/export/brick0/vm-store                  49158   Y       2675
Brick gluster1:/export/brick4/vm-store                  49158   Y       2309
NFS Server on localhost                                 2049    Y       27012
Self-heal Daemon on localhost                           N/A     Y       27019
NFS Server on gluster0                                  2049    Y       12875
Self-heal Daemon on gluster0                            N/A     Y       12882

Task Status of Volume vm-store
------------------------------------------------------------------------------
There are no active volume tasks
As I mentioned, the VMs are running happily. Initially the ISOs volume had the same issue. I did a volume start and stop on the volume as it was not being actively used, and that cleared up the issue in the console. However, as I have VMs running I can't do this for the vm-store volume.
Any suggestions, Alastair
participants (3)
- Alastair Neil
- Kanagaraj
- Sahina Bose