Users
19176 discussions
Hi,
I have created a user with the "user role" permissions. When logged in,
that user is able to view all the VMs; by default this should not happen.
Is there any solution to resolve this?
Thanks,
Nagaraju
Hi,
I'm running oVirt 3.5 in our lab, and currently I'm using NFS to a single
server. I'd like to move away from having a single point of failure.
Watching the mailing list, all the issues with gluster getting out of sync
and replica problems have me nervous about gluster, plus I just have 2
machines with lots of drive bays for storage. I've been reading about GFS2
and DRBD, and wanted opinions on whether either is a good/bad idea, or to
see if there are other alternatives.
My oVirt setup is currently 5 nodes and about 25 VMs, might double in size
eventually, but probably won't get much bigger than that.
Thanks,
Robert
--
Senior Software Engineer @ Parsons
Hello,
I am able to connect to SPICE that is behind a NAT (via Squid), but am
wondering if there is a way to do it with VNC as the console option?
I thought I had this working at one point, several installs ago, but it
doesn't seem to be the case currently. When I do try to connect to a VM
configured to use VNC as the console option, the remote-viewer screen
opens, says "Connecting to graphic server", then after about 30 seconds
or so says it is unable to connect to the graphic server.
Thanks! :-)
Regards,
Alan
Hi all,
I wonder how I can install the self-hosted engine on a non-routed iSCSI
network.
I want to have one interface for the (routed) management traffic and one
interface for the iSCSI traffic.
Thanks!
Christian
Is there any way to physically turn servers off via the cluster policy?
E.g. I have ten nodes; migrate VMs off and power down underutilized servers.
Thanks
Donny D
Hi,
While mounting the export domain on a different engine, I get the error
below:
Error while executing action New NFS Storage Domain: Error in creating a
Storage Domain. The selected storage path is not empty (probably contains
another Storage Domain). Either remove the existing Storage Domain from
this path, or change the Storage path).
-Nagaraju
Hi all,
we are trying to set up an oVirt environment with two hosts, both
connected to an iSCSI storage device, with a hosted engine and power
management configured over iLO. So far it seems to work fine in our
testing setup and starting/stopping VMs works smoothly with proper
scheduling between those hosts. So we wanted to test HA for the VMs now
and started to manually shutdown a host while there are still VMs
running on that machine (to simulate power failure or a kernel panic).
The expected outcome was that all machines where HA is enabled are
booted again. This works if the machine with the failure does not have
the engine running. If the machine with the hosted engine VM gets
shut down, the host gets into the "Not Responsive" state and all VMs end up
in an unknown state. However, the engine itself starts correctly on the
second host and it seems like it tries to fence the other host (as
expected). Events we get in the Open Virtualization Manager:
1. Host hosted_engine_2 is non responsive
2. Host hosted_engine_1 from cluster Default was chosen as a proxy to
execute Status command on Host hosted_engine_2.
3. Host hosted_engine_2 became non responsive. It has no power
management configured. Please check the host status, manually reboot it,
and click "Confirm Host Has Been Rebooted"
4. Host hosted_engine_2 is not responding. It will stay in Connecting
state for a grace period of 124 seconds and after that an attempt to
fence the host will be issued.
Event 4 is continuously coming every 3 minutes. Complete engine.log file
during engine boot up: http://pastebin.com/D6xS3Wfy
So the engine detects that the machine is not responding and wants to
fence it. But although the host has power management configured over iLO,
the engine thinks it does not. As a result the second host does not get
fenced and VMs are not migrated to the running machine.
In the log files there are also a lot of timeout exceptions. But I guess
that this is because the host cannot connect to the other machine.
Did anybody face similar problems with HA? Or any clue what the problem
might be?
Thanks,
Michael
----
ovirt version: 3.5.4
Hosted engine VM OS: CentOS 6.5
Host machines OS: CentOS 7
P.S. We also have to note that we had problems with the fence_ipmilan
command at the beginning. We received the message "Unable to obtain
correct plug status or plug is not available" whenever fence_ipmilan was
called. However, the fence_ilo4 command worked. So we now use a simple
wrapper script for fence_ipmilan that calls fence_ilo4 and passes the
arguments along.
I've searched on this issue all over the place and not found a full answer.
I've got a lab with older Dell PE860 servers with Pentium D processors. We
are attempting to prove out that we can move from VMware to KVM and oVirt,
but with 3.5 we can't get these servers to enter into a cluster due to the
age of the processors.
I've seen it referenced in a few list entries to modify 'the database' to
put CPUInfo entries in for the processor capabilities of this processor, but
it is unclear to me and I've been unable to find any documentation that
speaks to what database and how to modify it safely.
I know we could update these servers and get better power consumption, heat
generation, etc., but it's not in the cards and I can't move my newer
servers until we prove it out with these older ones, which ran ESXi 5.5 with
absolutely no issue. This cluster will be a functional test and small tools
cluster ultimately, so I don't really care about their lower end specs,
either.
Would installing something older than 3.5 help me here? Or is there some
doc somewhere I've just not found that describes this process of updating
the DB with these CPU parameters?
TIA!!
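For what it's worth, the "database" edits people refer to can usually be avoided: the supported CPU list lives in the ServerCPUList configuration option, editable with the engine-config tool on the engine host. A sketch, not verified against 3.5 specifically; the option is scoped per cluster compatibility version, and the exact value format should be copied from the existing output rather than from here:

```shell
# Show the current CPU list for cluster compatibility level 3.5:
engine-config -g ServerCPUList --cver=3.5
# Append an entry modeled on an existing one (value elided here),
# then restart the engine so the change takes effect:
#   engine-config -s "ServerCPUList=..." --cver=3.5
#   service ovirt-engine restart
```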
25 Sep '15
Hi again,
I'm trying to write a systemd unit (for CentOS 7) that automatically puts the host into "maintenance" on shutdown and activates it again after boot.
I wrote a Python script that does this, and it works: I can start it and see the host go into "maintenance" with all VMs migrated.
Unfortunately I can't get this script to run on shutdown/reboot and wait until all VMs are migrated and the host is in maintenance.
Here is my unit file:
[Unit]
Description=oVirt interface for managing host
After=remote-fs.target vdsmd.service multipathd.service libvirtd.service time-sync.target iscsid.service rpcbind.service supervdsmd.service sanlock.service vdsm-network.service
Wants=remote-fs.target vdsmd.service multipathd.service libvirtd.service time-sync.target iscsid.service rpcbind.service supervdsmd.service sanlock.service vdsm-network.service
[Service]
Type=simple
RemainAfterExit=yes
ExecStart=/usr/local/bin/ovirt-maintenance.sh active
ExecStop=/usr/local/bin/ovirt-maintenance.sh maintenance
KillMode=none
[Install]
WantedBy=multi-user.target
Could someone help me and tell me what I'm doing wrong?
Thanks a lot
Kind regards
Luca Bertoncello
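A possible adjustment, sketched below and untested: with `Type=simple` the unit counts as started as soon as the script is forked, and the stock stop timeout gives migrations little time to finish before systemd kills the job. `Type=oneshot` with `RemainAfterExit=yes` plus a long `TimeoutStopSec` is the usual pattern for this kind of start/stop hook (the 30-minute value is an assumption to tune):

```ini
[Unit]
Description=oVirt interface for managing host
# Ordering After= these services also means we are stopped *before*
# them at shutdown, while vdsm can still migrate VMs away.
After=remote-fs.target vdsmd.service multipathd.service libvirtd.service time-sync.target iscsid.service rpcbind.service supervdsmd.service sanlock.service vdsm-network.service
Wants=vdsmd.service

[Service]
# oneshot + RemainAfterExit marks the unit active after the start
# script exits, so ExecStop is guaranteed to run on shutdown.
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/ovirt-maintenance.sh active
ExecStop=/usr/local/bin/ovirt-maintenance.sh maintenance
# Give the migrations time to finish; the 90 s default stop timeout
# is usually too short for evacuating a loaded host.
TimeoutStopSec=30min

[Install]
WantedBy=multi-user.target
```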
I have a virtual mail security appliance that I am trying to import into oVirt 3.5.4. The appliance was built for KVM. It has a total of 5 SCSI disks. I can convert and copy only the OS disk, because the others expand to their full size.
The first problem is that the disks expand to their full size when I convert them to an oVirt format:
OS disk
mail.qcow2 (74M) converts to mail.img (294M)
Storage disks
250.qcow2 (256K) converts to 250.img (250GB)
1024.qcow2 (256K) converts to 1024.img (1TB)
2048.qcow2 (256K) converts to 2048.img (2TB)
4096.qcow2 (256K) converts to 4096.img (4TB)
8192.qcow2 (256K) converts to 8192.img (8TB)
The second problem is that these disks are SCSI, and they do not seem to work using the virtio-scsi selection. I tried selecting the IDE option, but there is a limit to the number of IDE disks that I can use.
VirtualBox has no issues running the appliance, which was distributed in OVA format. Any help would be appreciated.
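One thing worth checking before concluding the conversion really inflated the disks: `qemu-img convert` to raw normally writes a sparse file, so `ls -l` reports the full virtual size while `du` reports the blocks actually allocated (converting with `-O qcow2` instead keeps the images thin-provisioned outright). A small sketch of the apparent-size vs allocated-size distinction, using `truncate` as a stand-in for a converted raw image:

```shell
# A sparse file "looks" full size to ls but occupies almost no blocks.
truncate -s 1G sparse.img   # stand-in for a converted raw image
ls -lh sparse.img           # apparent size: 1.0G
du -h sparse.img            # allocated size: close to 0
```

If `du` on the converted .img files is small, the storage cost is not actually 250GB/1TB/etc., only the apparent size is.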
24 Sep '15
On Wed, Sep 23, 2015 at 12:40 PM, Ian Fraser <Ian.Fraser(a)asm.org.uk> wrote:
> Hi Nir,
>
> I have changed the v2v.py file as per your request, it has now changed the
> behaviour. The popup window still hangs and I get the following two events:
>
> Failed to retrieve VMs information from external server
> vpx://username%40domain@vcenter.server
> /datacentre_name/hostname?no_verify=1
>
> VDSM ovirt-host-02 command failed: internal error: Invalid or not yet
> handled value 'emptyBackingString' for VMX entry 'ide1:0.fileName' for
> device type 'cdrom-image'
>
Fixing the first error, we now see the real error: libvirt cannot handle
this VM configuration. We will ask one of the libvirt developers to look
into this.
You may try to disable the cdrom device on that VM, which is probably
useless now.
>
> I have attached the vdsm.log to this email, should I also attach to the BZ
> I opened?
Yes, please attach it.
(Adding back users(a)ovirt.org, since this thread may help others with the
same issue.)
>
> Many thanks
>
> Ian
>
> From: Nir Soffer [mailto:nsoffer@redhat.com]
> Sent: 23 September 2015 00:04
> To: Ian Fraser <Ian.Fraser(a)asm.org.uk>; Shahar Havivi <shavivi(a)redhat.com>
> Cc: users(a)ovirt.org
> Subject: Re: [ovirt-users] vmware import hangs after click load button on
> 3.5 rc5
>
> Hi Ian,
>
> Your import failed because looking up some disk failed. Unfortunately, we
> don't have enough information in the log about this failure.
>
> Because of incorrect error handling, this error failed the entire request,
> failing your import.
>
> Patch [1] fixes the second problem. It is possible that with this patch
> listing the external VMs will work and you will be able to import the VM,
> but it is also possible that the first error was significant and will fail
>
> It would be useful if you test this patch and report if it works for you.
>
> Would you open an ovirt bug for this issue, attaching the vdsm log?
>
> [1] https://gerrit.ovirt.org/46540
>
> Nir
>
> On Tue, Sep 22, 2015 at 9:09 AM, Ian Fraser <Ian.Fraser(a)asm.org.uk> wrote:
> Thanks Nir,
>
> File attached.
>
> From: Nir Soffer [mailto:nsoffer@redhat.com]
> Sent: 21 September 2015 23:16
> To: Ian Fraser <Ian.Fraser(a)asm.org.uk>
> Cc: users(a)ovirt.org
> Subject: Re: [ovirt-users] vmware import hangs after click load button on
> 3.5 rc5
>
> On Tue, Sep 22, 2015 at 12:14 AM, Ian Fraser <Ian.Fraser(a)asm.org.uk>
> wrote:
> I did get a “VDSM <hostname> command failed: local variable 'capacity'
> referenced before assignment” error in the events I have just noticed, does
> that shed any more light?
>
> This shed some light. Can you share the vdsm.log containing this error?
>
> Look in /var/log/vdsm/vdsm.log*
>
Hi Ovirt users,
I'm running oVirt hosted 3.4 with gluster data storage.
When I add a new host (CentOS 6.6) the data storage (as a glusterfs
volume) cannot be mounted.
I have the following errors in the gluster client log file:
[2015-09-24 12:27:22.636221] I [MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2015-09-24 12:27:22.636588] W [socket.c:588:__socket_rwv] 0-glusterfs:
readv on 172.16.0.5:24007 failed (No data available)
[2015-09-24 12:27:22.637307] E [rpc-clnt.c:362:saved_frames_unwind] (-->
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1eb)[0x7f427fb3063b]
(--> /usr/lib64/libgfrpc.so.0(saved_frames_unwind+0x1e7)[0x7f427f8fc1d7]
(--> /usr/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f427f8fc2ee]
(-->
/usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xab)[0x7f427f8fc3bb]
(--> /usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x1c2)[0x7f427f8fc9f2]
))))) 0-glusterfs: forced unwinding frame type(GlusterFS Handshake)
op(GETSPEC(2)) called at 2015-09-24 12:27:22.636344 (xid=0x1)
[2015-09-24 12:27:22.637333] E [glusterfsd-mgmt.c:1604:mgmt_getspec_cbk]
0-mgmt: failed to fetch volume file (key:/data)
[2015-09-24 12:27:22.637360] W [glusterfsd.c:1219:cleanup_and_exit]
(-->/usr/lib64/libgfrpc.so.0(saved_frames_unwind+0x20e) [0x7f427f8fc1fe]
-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x3f2) [0x40d5d2]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x65) [0x4059b5] ) 0-: received
signum (0), shutting down
[2015-09-24 12:27:22.637375] I [fuse-bridge.c:5595:fini] 0-fuse:
Unmounting '/rhev/data-center/mnt/glusterSD/172.16.0.5:_data'.
[2015-09-24 12:27:22.646246] W [glusterfsd.c:1219:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7a51) [0x7f427ec18a51]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xcd) [0x405e4d]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x65) [0x4059b5] ) 0-: received
signum (15), shutting down
[2015-09-24 12:27:22.646246] W [glusterfsd.c:1219:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7a51) [0x7f427ec18a51]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xcd) [0x405e4d]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x65) [0x4059b5] ) 0-: received
signum (15), shutting down
And nothing on the server side.
I suppose it is a version issue, since on the server side I have
glusterfs-api-3.6.3-1.el6.x86_64
glusterfs-fuse-3.6.3-1.el6.x86_64
glusterfs-libs-3.6.3-1.el6.x86_64
glusterfs-3.6.3-1.el6.x86_64
glusterfs-cli-3.6.3-1.el6.x86_64
glusterfs-rdma-3.6.3-1.el6.x86_64
glusterfs-server-3.6.3-1.el6.x86_64
and on the new host :
glusterfs-3.7.4-2.el6.x86_64
glusterfs-api-3.7.4-2.el6.x86_64
glusterfs-libs-3.7.4-2.el6.x86_64
glusterfs-fuse-3.7.4-2.el6.x86_64
glusterfs-cli-3.7.4-2.el6.x86_64
glusterfs-server-3.7.4-2.el6.x86_64
glusterfs-client-xlators-3.7.4-2.el6.x86_64
glusterfs-rdma-3.7.4-2.el6.x86_64
But since it is a production system, I'm not sure about performing a
gluster server upgrade.
Mounting a gluster volume as NFS is possible (the engine data storage
has been mounted that way).
I'm asking here because glusterfs comes from the oVirt 3.4 rpm repository.
If anyone has a hint for this problem,
thanks
Jean-Michel
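One lower-risk direction than upgrading the production servers might be to keep the new host on the same 3.6 client packages the servers run, by excluding 3.7 in yum and downgrading. A sketch based on the package lists above (versions taken from that output; whether the downgrade resolves cleanly on this host is an assumption to verify):

```shell
# On the new host: pin gluster to the 3.6 series the servers run.
echo "exclude=glusterfs*-3.7*" >> /etc/yum.conf
# Note: glusterfs-client-xlators only exists from 3.7 on and may need
# removing before the downgrade can resolve.
yum downgrade glusterfs-3.6.3-1.el6 glusterfs-api-3.6.3-1.el6 \
    glusterfs-libs-3.6.3-1.el6 glusterfs-fuse-3.6.3-1.el6 \
    glusterfs-cli-3.6.3-1.el6 glusterfs-server-3.6.3-1.el6 \
    glusterfs-rdma-3.6.3-1.el6
```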
Hi Ovirt users,
I'm running ovirt hosted 3.4 with gluster data storage.
When I add a new host (Centos 6.6) the data storage (as a glsuterfs)
cannot be mount.
I have the following errors in gluster client log file :
[2015-09-24 12:27:22.636221] I [MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2015-09-24 12:27:22.636588] W [socket.c:588:__socket_rwv] 0-glusterfs:
readv on 172.16.0.5:24007 failed (No data available)
[2015-09-24 12:27:22.637307] E [rpc-clnt.c:362:saved_frames_unwind] (-->
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1eb)[0x7f427fb3063b]
(--> /usr/lib64/libgfrpc.so.0(saved_frames_unwind+0x1e7)[0x7f427f8fc1d7]
(--> /usr/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f427f8fc2ee]
(-->
/usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xab)[0x7f427f8fc3bb]
(--> /usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x1c2)[0x7f427f8fc9f2]
))))) 0-glusterfs: forced unwinding frame type(GlusterFS Handshake)
op(GETSPEC(2)) called at 2015-09-24 12:27:22.636344 (xid=0x1)
[2015-09-24 12:27:22.637333] E [glusterfsd-mgmt.c:1604:mgmt_getspec_cbk]
0-mgmt: failed to fetch volume file (key:/data)
[2015-09-24 12:27:22.637360] W [glusterfsd.c:1219:cleanup_and_exit]
(-->/usr/lib64/libgfrpc.so.0(saved_frames_unwind+0x20e) [0x7f427f8fc1fe]
-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x3f2) [0x40d5d2]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x65) [0x4059b5] ) 0-: received
signum (0), shutting down
[2015-09-24 12:27:22.637375] I [fuse-bridge.c:5595:fini] 0-fuse:
Unmounting '/rhev/data-center/mnt/glusterSD/172.16.0.5:_data'.
[2015-09-24 12:27:22.646246] W [glusterfsd.c:1219:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7a51) [0x7f427ec18a51]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xcd) [0x405e4d]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x65) [0x4059b5] ) 0-: received
signum (15), shutting down
[2015-09-24 12:27:22.646246] W [glusterfsd.c:1219:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7a51) [0x7f427ec18a51]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xcd) [0x405e4d]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x65) [0x4059b5] ) 0-: received
signum (15), shutting down
And nothing server side.
I suppose it is a version issue since on server side I have
glusterfs-api-3.6.3-1.el6.x86_64
glusterfs-fuse-3.6.3-1.el6.x86_64
glusterfs-libs-3.6.3-1.el6.x86_64
glusterfs-3.6.3-1.el6.x86_64
glusterfs-cli-3.6.3-1.el6.x86_64
glusterfs-rdma-3.6.3-1.el6.x86_64
glusterfs-server-3.6.3-1.el6.x86_64
and on the new host :
glusterfs-3.7.4-2.el6.x86_64
glusterfs-api-3.7.4-2.el6.x86_64
glusterfs-libs-3.7.4-2.el6.x86_64
glusterfs-fuse-3.7.4-2.el6.x86_64
glusterfs-cli-3.7.4-2.el6.x86_64
glusterfs-server-3.7.4-2.el6.x86_64
glusterfs-client-xlators-3.7.4-2.el6.x86_64
glusterfs-rdma-3.7.4-2.el6.x86_64
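The skew above (3.6.3 on the servers, 3.7.4 on the new host) is a plausible cause of the failed GETSPEC handshake. As a minimal sketch, a hypothetical helper (not a gluster tool) that flags a major.minor mismatch from the rpm package strings listed above:

```python
# Sketch: flag a major.minor skew between gluster client and server packages.
# Package strings are taken from the rpm listings above; the helper itself
# is hypothetical, not part of glusterfs.
import re

def gluster_version(rpm_name):
    """Extract (major, minor, patch) from e.g. 'glusterfs-3.6.3-1.el6.x86_64'."""
    m = re.search(r'-(\d+)\.(\d+)\.(\d+)-', rpm_name)
    if not m:
        raise ValueError("no version found in %r" % rpm_name)
    return tuple(int(x) for x in m.groups())

def versions_compatible(client_rpm, server_rpm):
    """Treat differing major.minor as a potential handshake/volfile problem."""
    return gluster_version(client_rpm)[:2] == gluster_version(server_rpm)[:2]

print(versions_compatible("glusterfs-3.7.4-2.el6.x86_64",
                          "glusterfs-3.6.3-1.el6.x86_64"))  # skew -> False
```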
But since it is a production system, I'm wary of performing a
gluster server upgrade.
Mounting a gluster volume as NFS is possible (the engine data storage
has been mounted).
I'm asking here because glusterfs comes from the oVirt 3.4 rpm repository.
If anyone has a hint for this problem,
thanks
Jean-Michel
[ANN] oVirt 3.5.5 Second Release Candidate is now available for testing
by Sandro Bonazzola 24 Sep '15
The oVirt Project is pleased to announce the availability
of the Second Release Candidate of oVirt 3.5.5 for testing, as of September
24th, 2015.
This release is available now for
Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and
Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar).
This release supports Hypervisor Hosts running
Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar),
Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar) and Fedora 21.
This release of oVirt 3.5.5 includes new DWH and reports packages.
See the release notes [1] for an initial list of bugs fixed.
Please refer to release notes [1] for Installation / Upgrade instructions.
New oVirt Node ISO will be available soon as well[2].
Please note that mirrors[3] usually need one day before being
synchronized.
Please refer to the release notes for known issues in this release.
Please add yourself to the test page[4] if you're testing this release.
[1] http://www.ovirt.org/OVirt_3.5.5_Release_Notes
[2] http://plain.resources.ovirt.org/pub/ovirt-3.5-pre/iso/
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors
[4] http://www.ovirt.org/Testing/oVirt_3.5.5_Testing
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Hi,
last Sunday I experienced a power outage with one of my two oVirt
hypervisors. After power was restored I experienced some weirdness:
- on one of the VMs running on this hypervisor the boot disk changed, so
it was no longer able to boot. Looking at the console the VM would hang
on "Booting from hard disk". After I noticed that the wrong virtual disk
was marked as OS/bootable I got it booting again after correcting it to
the proper boot disk. This was done from the oVirt management server.
- on another VM I tried today to add another virtual disk to expand a
LVM volume. In dmesg I can see the new device:
[17167560.005768] vdc: unknown partition table
However, when I tried to run pvcreate I got an error message saying that
this was already marked as an LVM disk, and then running pvs give me the
following error:
# pvs
Couldn't find device with uuid 7dDcyq-TZ6I-96Im-lfjL-cTUv-nff1-B11Mm7.
PV VG Fmt Attr PSize PFree
/dev/vda5 rit-kvm-ssweb02 lvm2 a-- 59.76g 0
/dev/vdb1 vg_syncshare lvm2 a-- 500.00g 0
/dev/vdc1 VG_SYNCSHARE01 lvm2 a-- 400.00g 0
unknown device VG_SYNCSHARE01 lvm2 a-m 1024.00g 0
As you can see there's already a PV called /dev/vdc1, as well another
one named "unknown device". These two PVs belong to a VG that does NOT
belong to this VM, VG_SYNCSHARE01. The uuids for these two PVs are:
--- Physical volume ---
PV Name unknown device
VG Name VG_SYNCSHARE01
PV Size 1024.00 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 262143
Free PE 0
Allocated PE 262143
PV UUID 7dDcyq-TZ6I-96Im-lfjL-cTUv-nff1-B11Mm7
--- Physical volume ---
PV Name /dev/vdc1
VG Name VG_SYNCSHARE01
PV Size 400.00 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 102399
Free PE 0
Allocated PE 102399
PV UUID oKSDoo-3pxU-0uue-zQ0H-kv1N-lyPa-P2M2FY
The two PVs which don't belong on this VM actually belong to a
totally different VM.
On VM number two:
# pvdisplay
--- Physical volume ---
PV Name /dev/vdb1
VG Name VG_SYNCSHARE01
PV Size 1024.00 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 262143
Free PE 0
Allocated PE 262143
PV UUID 7dDcyq-TZ6I-96Im-lfjL-cTUv-nff1-B11Mm7
--- Physical volume ---
PV Name /dev/vdd1
VG Name VG_SYNCSHARE01
PV Size 400.00 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 102399
Free PE 0
Allocated PE 102399
PV UUID oKSDoo-3pxU-0uue-zQ0H-kv1N-lyPa-P2M2FY
As you can see, same uuid and VG name, but two different VMs.
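Duplicate PV UUIDs across two guests strongly suggest they were handed overlapping storage. As a quick cross-check, a sketch of a hypothetical parser (assuming whitespace-separated `pvs --noheadings -o pv_name,vg_name,pv_uuid` output, as in the listings above; PV names containing spaces like "unknown device" would need extra handling):

```python
# Sketch: flag PV UUIDs that appear on more than one device across the
# combined `pvs --noheadings -o pv_name,vg_name,pv_uuid` output of several
# machines. Hypothetical helper; the third sample UUID is a placeholder.
from collections import defaultdict

def duplicate_pv_uuids(pvs_output):
    """Map each PV UUID to the devices carrying it; return only duplicates."""
    seen = defaultdict(list)
    for line in pvs_output.strip().splitlines():
        fields = line.split()
        if len(fields) < 3:
            continue
        name, vg, uuid = fields[0], fields[1], fields[2]
        seen[uuid].append((name, vg))
    return {u: devs for u, devs in seen.items() if len(devs) > 1}

sample = """
/dev/vdc1  VG_SYNCSHARE01  oKSDoo-3pxU-0uue-zQ0H-kv1N-lyPa-P2M2FY
/dev/vdd1  VG_SYNCSHARE01  oKSDoo-3pxU-0uue-zQ0H-kv1N-lyPa-P2M2FY
/dev/vdb1  vg_syncshare    aaaaaa-1111-2222-3333-4444-5555-666666
"""
print(duplicate_pv_uuids(sample))
```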
My setup:
oVirt manager: oVirt 3.5 running on CentOS 6.7
oVirt hypervisors: two oVirt 3.5 servers running on CentOS 6.7
During the time of the power outage mentioned earlier I was running
oVirt 3.4, but I upgraded today and rebooted the manager and both
hypervisors, but NOT the VMs.
Virtual machines:
Debian wheezy 7.9 x86_64
Storage:
HP LeftHand iSCSI
I have tried to locate error messages in the logs which can be related
to this behaviour, but so far no luck :(
--
Morten A. Middelthon
Email: morten(a)flipp.net
Hi
can you please point me to the tools to back up and restore the VM images in
oVirt?
Thanks,
Nagaraju
Not able to resume a VM which was paused because of gluster quorum issue
by Ramesh Nachimuthu 24 Sep '15
Hi,
I am not able to resume a VM which was paused because of gluster
client quorum issue. Here is what happened in my setup.
1. Created a gluster storage domain which is backed by gluster volume
with replica 3.
2. Killed one brick process. So only two bricks are running in replica 3
setup.
3. Created two VMs
4. Started some IO using fio on both of the VMs
5. After some time got the following error in gluster mount and VMs
moved to paused state.
" server 10.70.45.17:49217 has not responded in the last 42
seconds, disconnecting."
"vmstore-replicate-0: e16d1e40-2b6e-4f19-977d-e099f465dfc6:
Failing WRITE as quorum is not met"
more gluster mount logs at http://pastebin.com/UmiUQq0F
6. After some time gluster quorum is active again and I am able to write to
the gluster file system.
7. When I try to resume the VM it doesn't work, and I get the following error
in the vdsm log:
http://pastebin.com/aXiamY15
Regards,
Ramesh
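For reference, with client quorum in its default "auto" mode on a replica 3 volume, writes are allowed only while a strict majority of the bricks in the replica set are reachable. That matches the sequence above: one killed brick still leaves 2 of 3 up, but the 42-second ping timeout disconnecting a second brick drops the client below quorum and the WRITE fails. A one-line sketch of that majority rule (for odd replica counts; "auto" has extra rules for replica 2):

```python
# Sketch of gluster's client-quorum "auto" rule for a replica set:
# writes need a strict majority of bricks up (2 of 3 for replica 3).
def quorum_met(bricks_up, replica_count):
    return bricks_up > replica_count // 2

print(quorum_met(2, 3))  # one brick down: writes still allowed -> True
print(quorum_met(1, 3))  # two bricks unreachable: WRITE fails -> False
```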
Hi All,
Can someone help me in configuring LDAP authentication for oVirt?
Thanks,
Nagaraju
I'm exporting a VM as part of testing backing up VMs and this 75GB VM has
been exporting for 3+ hours. The storage is running gluster on 10GbE so
bandwidth isn't the issue. Importing these VMs from the same export took
roughly 10 minutes. But I'm not seeing any errors. The only thing in the
engine.log is:
/var/log/ovirt-engine/engine.log:2015-09-21 13:16:14,459 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-6) [3ab846b4] Correlation ID: 1db67021, Job
ID: b348e823-fb6a-42b8-8561-8ab6047a27c7, Call Stack: null, Custom Event
ID: -1, Message: Starting export Vm asdf to export
Is there a way to kill the export?
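For scale: at 10 GbE wire speed, 75 GB should move in on the order of a minute, so a 3+ hour export points at something other than bandwidth (qemu-img conversion, sparse-file handling, or a hung task). A back-of-envelope sketch:

```python
# Back-of-envelope: time to move 75 GB over a 10 Gbit/s link at wire speed,
# ignoring protocol overhead and conversion cost.
size_gb = 75
link_gbit_s = 10
seconds = size_gb * 8 / link_gbit_s  # GB -> Gbit, divided by line rate
print(round(seconds))  # 60
```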
--
*Michael Kleinpaste*
Senior Systems Administrator
SharperLending, LLC.
www.SharperLending.com
Michael.Kleinpaste(a)SharperLending.com
(509) 324-1230 Fax: (509) 324-1234
Hi,
Is there any documentation about FreeIPA integration with oVirt 3.5 and how to configure it?
Thanks
Jose
--
Jose Ferradeira
http://www.logicworks.pt
23 Sep '15
On Wed, Sep 23, 2015 at 12:14:02PM -0400, Douglas Schilling Landgraf wrote:
>
> On 09/22/2015 12:27 AM, Budur Nagaraju wrote:
> >Below is the format I have updated ,
> >but still am facing the same issues.
To: Budur Nagaraju
Please keep all your replies on the mailing list, as the mailing list
archives are there to help others who may have the same problem in
future. If you prefer to have personal help, you can pay for a
Red Hat subscription.
Please also update to the new virt-v2v version, as described in my
previous email. The old version is unmaintained, and may not even
work with oVirt (I don't know -- no one has tried it for about 3
years).
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-builder quickly builds VMs from scratch
http://libguestfs.org/virt-builder.1.html
1
0
Hi list!
Sorry for the previous e-mail... a problem with my Outlook... :(
Here again...
After a "war week" I finally got a systemd script to put the host into
"maintenance" when a shutdown is started.
Now the problem is that the automatic migration of the VM does NOT
work...
In the web console I see the host go to "Preparing for maintenance" and the
VM start migrating; then the host reaches "maintenance" and a couple of
seconds later the VM is killed on the other host...
In the Log of the engine I see:
2015-09-23 11:14:17,165 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-30) [683624fe] Correlation ID: 52938d4d, Job ID: c0efe5b9-0bc3-4c81-9ee7-63ddf90a6afc, Call Stack: null, Custom Event ID: -1, Message: Mig
ration failed while Host is in 'preparing for maintenance' state.
Consider manual intervention: stopping/migrating Vms as Host's state will not
turn to maintenance while VMs are still running on it.(VM: TestVM, Source: vmhost06, Destination: vmhost03).
2015-09-23 11:14:17,165 INFO [org.ovirt.engine.core.bll.InternalMigrateVmCommand] (org.ovirt.thread.pool-8-thread-30) [683624fe] Lock freed to object EngineLock [exclusiveLocks= key: aabf6e76-8387-4441-a328-6a7dc32e2b4d value: VM
, sharedLocks= ]
(see http://pastebin.com/3Ca8W3vE)
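A common cause of this symptom is the shutdown script returning before the deactivate completes, so the host goes down mid-migration. A sketch of the oVirt 3.5 REST call such a pre-shutdown unit would issue (engine URL and host id below are placeholders, not taken from this thread; the script would then have to poll the host status until it reaches "maintenance" before letting shutdown proceed):

```python
# Sketch: build (not send) the REST request that moves an oVirt host into
# maintenance: POST /api/hosts/{id}/deactivate with an <action/> body.
# Engine URL and host id are placeholders.
def deactivate_request(engine_base, host_id):
    """Assemble the oVirt 3.5 REST call that deactivates a host."""
    return {
        "method": "POST",
        "url": "%s/api/hosts/%s/deactivate" % (engine_base.rstrip("/"), host_id),
        "headers": {"Content-Type": "application/xml"},
        "body": "<action/>",
    }

req = deactivate_request("https://engine.example.com", "HOST-UUID")
print(req["url"])
```

The key point is that the unit must block on that call (and on polling GET /api/hosts/{id}) until migrations finish, rather than returning immediately.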
On the host I see these two errors:
libvirtEventLoop::ERROR::2015-09-23 11:14:14,690::task::866::Storage.TaskManager.Task::(_setError) Task=`2670e82a-c9c7-4da6-b6f6-cff7bce25da1`::Unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 873, in _run
return fn(*args, **kargs)
File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/storage/hsm.py", line 3209, in inappropriateDevices
fails = supervdsm.getProxy().rmAppropriateRules(thiefId)
File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
return callMethod()
File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
**kwargs)
File "<string>", line 2, in rmAppropriateRules
File "/usr/lib64/python2.7/multiprocessing/managers.py", line 755, in _callmethod
self._connect()
File "/usr/lib64/python2.7/multiprocessing/managers.py", line 742, in _connect
conn = self._Client(self._token.address, authkey=self._authkey)
File "/usr/lib64/python2.7/multiprocessing/connection.py", line 173, in Client
c = SocketClient(address)
File "/usr/lib64/python2.7/multiprocessing/connection.py", line 308, in SocketClient
s.connect(address)
File "/usr/lib64/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
error: [Errno 2] No such file or directory
libvirtEventLoop::ERROR::2015-09-23 11:14:14,696::dispatcher::79::Storage.Dispatcher::(wrapper) [Errno 2] No such file or directory
Traceback (most recent call last):
File "/usr/share/vdsm/storage/dispatcher.py", line 71, in wrapper
result = ctask.prepare(func, *args, **kwargs)
File "/usr/share/vdsm/storage/task.py", line 103, in wrapper
return m(self, *a, **kw)
File "/usr/share/vdsm/storage/task.py", line 1179, in prepare
raise self.error
error: [Errno 2] No such file or directory
libvirtEventLoop::DEBUG::2015-09-23 11:14:14,697::vm::2799::vm.Vm::(setDownStatus) vmId=`aabf6e76-8387-4441-a328-6a7dc32e2b4d`::Changed state to Down: User shut down from within the guest (code=7)
libvirtEventLoop::DEBUG::2015-09-23 11:14:14,698::sampling::425::vm.Vm::(stop) vmId=`aabf6e76-8387-4441-a328-6a7dc32e2b4d`::Stop statistics collection
Thread-891::ERROR::2015-09-23 11:14:14,704::migration::161::vm.Vm::(_recover) vmId=`aabf6e76-8387-4441-a328-6a7dc32e2b4d`::'NoneType' object has no attribute 'XMLDesc'
Thread-891::WARNING::2015-09-23 11:14:14,712::vm::1966::vm.Vm::(_set_lastStatus) vmId=`aabf6e76-8387-4441-a328-6a7dc32e2b4d`::trying to set state to Up when already Down
Thread-891::ERROR::2015-09-23 11:14:14,712::migration::260::vm.Vm::(run) vmId=`aabf6e76-8387-4441-a328-6a7dc32e2b4d`::Failed to migrate
Traceback (most recent call last):
File "/usr/share/vdsm/virt/migration.py", line 231, in run
self._setupRemoteMachineParams()
File "/usr/share/vdsm/virt/migration.py", line 132, in _setupRemoteMachineParams
self._machineParams['_srcDomXML'] = self._vm._dom.XMLDesc(0)
AttributeError: 'NoneType' object has no attribute 'XMLDesc'
Can someone help me finding the problem?
Thanks
Best regards
Luca Bertoncello
--
Visit our websites:
www.queo.biz Agency for brand management and communication
www.queoflow.com IT consulting and custom software development
Luca Bertoncello
Administrator
Phone: +49 351 21 30 38 0
Fax: +49 351 21 30 38 99
E-Mail: l.bertoncello(a)queo-group.com
queo GmbH
Tharandter Str. 13
01159 Dresden
Registered office: Dresden
Commercial register: Amtsgericht Dresden HRB 22352
Managing directors: Rüdiger Henke, André Pinkert
VAT ID: DE234220077
+ ovirt-users
Some clarity on your setup:
sjcvhost03 - is this your arbiter node and oVirt management node? And
are you running compute + storage on the same nodes, i.e.,
sjcstorage01, sjcstorage02, sjcvhost03 (arbiter)?
CreateStorageDomainVDSCommand(HostName = sjcvhost03,
CreateStorageDomainVDSCommandParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storageDomain='StorageDomainStatic:{name='sjcvmstore',
id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
args='sjcstorage01:/vmstore'}), log id: b9fe587
- fails with Error creating a storage domain's metadata: ("create meta
file 'outbox' failed: [Errno 5] Input/output error",
Are the vdsm logs you provided from sjcvhost03? There are no errors to
be seen in the gluster log you provided. Could you provide the mount log
from sjcvhost03 (at
/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore.log most likely)?
If possible, /var/log/glusterfs/* from the 3 storage nodes.
thanks
sahina
On 09/23/2015 05:02 AM, Brett Stevens wrote:
> Hi Sahina,
>
> as requested here is some logs taken during a domain create.
>
> 2015-09-22 18:46:44,320 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-88) [] START,
> GlusterVolumesListVDSCommand(HostName = sjcstorage01,
> GlusterVolumesListVDSParameters:{runAsync='true',
> hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id: 2205ff1
>
> 2015-09-22 18:46:44,413 WARN
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
> (DefaultQuartzScheduler_Worker-88) [] Could not associate brick
> 'sjcstorage01:/export/vmstore/brick01' of volume
> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct network as no
> gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>
> 2015-09-22 18:46:44,417 WARN
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
> (DefaultQuartzScheduler_Worker-88) [] Could not associate brick
> 'sjcstorage02:/export/vmstore/brick01' of volume
> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct network as no
> gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>
> 2015-09-22 18:46:44,417 WARN
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
> (DefaultQuartzScheduler_Worker-88) [] Could not add brick
> 'sjcvhost02:/export/vmstore/brick01' to volume
> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' - server uuid
> '29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in cluster
> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>
> 2015-09-22 18:46:44,418 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-88) [] FINISH,
> GlusterVolumesListVDSCommand, return:
> {030f270a-0999-4df4-9b14-ae56eb0a2fb9=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a0628f36},
> log id: 2205ff1
>
> 2015-09-22 18:46:45,215 INFO
> [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
> (default task-24) [5099cda3] Lock Acquired to object
> 'EngineLock:{exclusiveLocks='[sjcstorage01:/vmstore=<STORAGE_CONNECTION,
> ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
>
> 2015-09-22 18:46:45,230 INFO
> [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
> (default task-24) [5099cda3] Running command:
> AddStorageServerConnectionCommand internal: false. Entities affected
> : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group
> CREATE_STORAGE_DOMAIN with role type ADMIN
>
> 2015-09-22 18:46:45,233 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (default task-24) [5099cda3] START,
> ConnectStorageServerVDSCommand(HostName = sjcvhost03,
> StorageServerConnectionManagementVDSParameters:{runAsync='true',
> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
> storagePoolId='00000000-0000-0000-0000-000000000000',
> storageType='GLUSTERFS',
> connectionList='[StorageServerConnections:{id='null',
> connection='sjcstorage01:/vmstore', iqn='null', vfsType='glusterfs',
> mountOptions='null', nfsVersion='null', nfsRetrans='null',
> nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 6a112292
>
> 2015-09-22 18:46:48,065 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (default task-24) [5099cda3] FINISH, ConnectStorageServerVDSCommand,
> return: {00000000-0000-0000-0000-000000000000=0}, log id: 6a112292
>
> 2015-09-22 18:46:48,073 INFO
> [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
> (default task-24) [5099cda3] Lock freed to object
> 'EngineLock:{exclusiveLocks='[sjcstorage01:/vmstore=<STORAGE_CONNECTION,
> ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
>
> 2015-09-22 18:46:48,188 INFO
> [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
> (default task-23) [6410419] Running command:
> AddGlusterFsStorageDomainCommand internal: false. Entities affected :
> ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group
> CREATE_STORAGE_DOMAIN with role type ADMIN
>
> 2015-09-22 18:46:48,206 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (default task-23) [6410419] START,
> ConnectStorageServerVDSCommand(HostName = sjcvhost03,
> StorageServerConnectionManagementVDSParameters:{runAsync='true',
> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
> storagePoolId='00000000-0000-0000-0000-000000000000',
> storageType='GLUSTERFS',
> connectionList='[StorageServerConnections:{id='ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e',
> connection='sjcstorage01:/vmstore', iqn='null', vfsType='glusterfs',
> mountOptions='null', nfsVersion='null', nfsRetrans='null',
> nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 38a2b0d
>
> 2015-09-22 18:46:48,219 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (default task-23) [6410419] FINISH, ConnectStorageServerVDSCommand,
> return: {ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=0}, log id: 38a2b0d
>
> 2015-09-22 18:46:48,221 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
> (default task-23) [6410419] START,
> CreateStorageDomainVDSCommand(HostName = sjcvhost03,
> CreateStorageDomainVDSCommandParameters:{runAsync='true',
> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
> storageDomain='StorageDomainStatic:{name='sjcvmstore',
> id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
> args='sjcstorage01:/vmstore'}), log id: b9fe587
>
> 2015-09-22 18:46:48,744 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-23) [6410419] Correlation ID: null, Call Stack: null,
> Custom Event ID: -1, Message: VDSM sjcvhost03 command failed: Error
> creating a storage domain's metadata: ("create meta file 'outbox'
> failed: [Errno 5] Input/output error",)
>
> 2015-09-22 18:46:48,744 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
> (default task-23) [6410419] Command
> 'org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand'
> return value 'StatusOnlyReturnForXmlRpc [status=StatusForXmlRpc
> [code=362, message=Error creating a storage domain's metadata:
> ("create meta file 'outbox' failed: [Errno 5] Input/output error",)]]'
>
> 2015-09-22 18:46:48,744 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
> (default task-23) [6410419] HostName = sjcvhost03
>
> 2015-09-22 18:46:48,745 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
> (default task-23) [6410419] Command
> 'CreateStorageDomainVDSCommand(HostName = sjcvhost03,
> CreateStorageDomainVDSCommandParameters:{runAsync='true',
> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
> storageDomain='StorageDomainStatic:{name='sjcvmstore',
> id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
> args='sjcstorage01:/vmstore'})' execution failed: VDSGenericException:
> VDSErrorException: Failed in vdscommand to CreateStorageDomainVDS,
> error = Error creating a storage domain's metadata: ("create meta file
> 'outbox' failed: [Errno 5] Input/output error",)
>
> 2015-09-22 18:46:48,745 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
> (default task-23) [6410419] FINISH, CreateStorageDomainVDSCommand, log
> id: b9fe587
>
> 2015-09-22 18:46:48,745 ERROR
> [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
> (default task-23) [6410419] Command
> 'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand'
> failed: EngineException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed in vdscommand to
> CreateStorageDomainVDS, error = Error creating a storage domain's
> metadata: ("create meta file 'outbox' failed: [Errno 5] Input/output
> error",) (Failed with error StorageDomainMetadataCreationError and
> code 362)
>
> 2015-09-22 18:46:48,755 INFO
> [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
> (default task-23) [6410419] Command
> [id=5ae15f53-69a1-47c5-b3a5-82f32c20e48f]: Compensating NEW_ENTITY_ID
> of org.ovirt.engine.core.common.businessentities.StorageDomainDynamic;
> snapshot: 597d5b5b-7c09-4de9-8840-6993bd9b61a6.
>
> 2015-09-22 18:46:48,758 INFO
> [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
> (default task-23) [6410419] Command
> [id=5ae15f53-69a1-47c5-b3a5-82f32c20e48f]: Compensating NEW_ENTITY_ID
> of org.ovirt.engine.core.common.businessentities.StorageDomainStatic;
> snapshot: 597d5b5b-7c09-4de9-8840-6993bd9b61a6.
>
> 2015-09-22 18:46:48,769 ERROR
> [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
> (default task-23) [6410419] Transaction rolled-back for command
> 'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand'.
>
> 2015-09-22 18:46:48,784 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-23) [6410419] Correlation ID: 6410419, Job ID:
> 78692780-a06f-49a5-b6b1-e6c24a820d62, Call Stack: null, Custom Event
> ID: -1, Message: Failed to add Storage Domain sjcvmstore. (User:
> admin@internal)
>
> 2015-09-22 18:46:48,996 INFO
> [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
> (default task-32) [1635a244] Lock Acquired to object
> 'EngineLock:{exclusiveLocks='[ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=<STORAGE_CONNECTION,
> ACTION_TYPE_FAILED_OBJECT_LOCKED>,
> sjcstorage01:/vmstore=<STORAGE_CONNECTION,
> ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
>
> 2015-09-22 18:46:49,018 INFO
> [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
> (default task-32) [1635a244] Running command:
> RemoveStorageServerConnectionCommand internal: false. Entities
> affected : ID: aaa00000-0000-0000-0000-123456789aaa Type:
> SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN
>
> 2015-09-22 18:46:49,024 INFO
> [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
> (default task-32) [1635a244] Removing connection
> 'ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e' from database
>
> 2015-09-22 18:46:49,026 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
> (default task-32) [1635a244] START,
> DisconnectStorageServerVDSCommand(HostName = sjcvhost03,
> StorageServerConnectionManagementVDSParameters:{runAsync='true',
> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
> storagePoolId='00000000-0000-0000-0000-000000000000',
> storageType='GLUSTERFS',
> connectionList='[StorageServerConnections:{id='ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e',
> connection='sjcstorage01:/vmstore', iqn='null', vfsType='glusterfs',
> mountOptions='null', nfsVersion='null', nfsRetrans='null',
> nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 39d3b568
>
> 2015-09-22 18:46:49,248 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
> (default task-32) [1635a244] FINISH,
> DisconnectStorageServerVDSCommand, return:
> {ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=0}, log id: 39d3b568
>
> 2015-09-22 18:46:49,252 INFO
> [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
> (default task-32) [1635a244] Lock freed to object
> 'EngineLock:{exclusiveLocks='[ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=<STORAGE_CONNECTION,
> ACTION_TYPE_FAILED_OBJECT_LOCKED>,
> sjcstorage01:/vmstore=<STORAGE_CONNECTION,
> ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
>
> 2015-09-22 18:46:49,431 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-3) [] START,
> GlusterVolumesListVDSCommand(HostName = sjcstorage01,
> GlusterVolumesListVDSParameters:{runAsync='true',
> hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id: 17014ae8
>
> 2015-09-22 18:46:49,511 WARN
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
> (DefaultQuartzScheduler_Worker-3) [] Could not associate brick
> 'sjcstorage01:/export/vmstore/brick01' of volume
> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct network as no
> gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>
> 2015-09-22 18:46:49,515 WARN
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
> (DefaultQuartzScheduler_Worker-3) [] Could not associate brick
> 'sjcstorage02:/export/vmstore/brick01' of volume
> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct network as no
> gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>
> 2015-09-22 18:46:49,516 WARN
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
> (DefaultQuartzScheduler_Worker-3) [] Could not add brick
> 'sjcvhost02:/export/vmstore/brick01' to volume
> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' - server uuid
> '29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in cluster
> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>
> 2015-09-22 18:46:49,516 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-3) [] FINISH,
> GlusterVolumesListVDSCommand, return:
> {030f270a-0999-4df4-9b14-ae56eb0a2fb9=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@92ed0f75},
> log id: 17014ae8
>
>
>
> oVirt engine thinks that sjcstorage01 is sjcstorage01; it's all a testbed
> at the moment with all short names, defined in /etc/hosts (all
> copied to each server for consistency)
>
>
> volume status for vmstore is
>
>
> Status of volume: vmstore
>
> Gluster process TCP Port RDMA Port Online Pid
>
> ------------------------------------------------------------------------------
>
> Brick sjcstorage01:/export/vmstore/brick01 49157 0 Y 7444
>
> Brick sjcstorage02:/export/vmstore/brick01 49157 0 Y 4063
>
> Brick sjcvhost02:/export/vmstore/brick01 49156 0 Y 3243
>
> NFS Server on localhost 2049 0 Y 3268
>
> Self-heal Daemon on localhost N/A N/A Y 3284
>
> NFS Server on sjcstorage01 2049 0 Y 7463
>
> Self-heal Daemon on sjcstorage01 N/A N/A Y 7472
>
> NFS Server on sjcstorage02 2049 0 Y 4082
>
> Self-heal Daemon on sjcstorage02 N/A N/A Y 4090
>
> Task Status of Volume vmstore
>
> ------------------------------------------------------------------------------
>
> There are no active volume tasks
>
>
>
> vdsm logs from the time the domain is added
>
>
> Thread-789::DEBUG::2015-09-22
> 19:12:05,865::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Thread-790::DEBUG::2015-09-22
> 19:12:07,797::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::moving from state init ->
> state preparing
>
> Thread-790::INFO::2015-09-22
> 19:12:07,797::logUtils::48::dispatcher::(wrapper) Run and protect:
> repoStats(options=None)
>
> Thread-790::INFO::2015-09-22
> 19:12:07,797::logUtils::51::dispatcher::(wrapper) Run and protect:
> repoStats, Return response: {}
>
> Thread-790::DEBUG::2015-09-22
> 19:12:07,797::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::finished: {}
>
> Thread-790::DEBUG::2015-09-22
> 19:12:07,797::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::moving from state
> preparing -> state finished
>
> Thread-790::DEBUG::2015-09-22
> 19:12:07,797::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-790::DEBUG::2015-09-22
> 19:12:07,797::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-790::DEBUG::2015-09-22
> 19:12:07,797::task::993::Storage.TaskManager.Task::(_decref)
> Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::ref 0 aborting False
>
> Thread-790::DEBUG::2015-09-22
> 19:12:07,802::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Reactor thread::INFO::2015-09-22
> 19:12:14,816::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
> Accepting connection from 127.0.0.1:52510
>
> Reactor thread::DEBUG::2015-09-22
> 19:12:14,822::protocoldetector::82::ProtocolDetector.Detector::(__init__)
> Using required_size=11
>
> Reactor thread::INFO::2015-09-22
> 19:12:14,823::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
> Detected protocol xml from 127.0.0.1:52510
>
> Reactor thread::DEBUG::2015-09-22
> 19:12:14,823::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml
> over http detected from ('127.0.0.1', 52510)
>
> BindingXMLRPC::INFO::2015-09-22
> 19:12:14,823::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
> request handler for 127.0.0.1:52510
>
> Thread-791::INFO::2015-09-22
> 19:12:14,823::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52510 started
>
> Thread-791::INFO::2015-09-22
> 19:12:14,825::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52510 stopped
>
> Thread-792::DEBUG::2015-09-22
> 19:12:20,872::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Thread-793::DEBUG::2015-09-22
> 19:12:22,832::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::moving from state init ->
> state preparing
>
> Thread-793::INFO::2015-09-22
> 19:12:22,832::logUtils::48::dispatcher::(wrapper) Run and protect:
> repoStats(options=None)
>
> Thread-793::INFO::2015-09-22
> 19:12:22,832::logUtils::51::dispatcher::(wrapper) Run and protect:
> repoStats, Return response: {}
>
> Thread-793::DEBUG::2015-09-22
> 19:12:22,832::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::finished: {}
>
> Thread-793::DEBUG::2015-09-22
> 19:12:22,832::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::moving from state
> preparing -> state finished
>
> Thread-793::DEBUG::2015-09-22
> 19:12:22,833::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-793::DEBUG::2015-09-22
> 19:12:22,833::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-793::DEBUG::2015-09-22
> 19:12:22,833::task::993::Storage.TaskManager.Task::(_decref)
> Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::ref 0 aborting False
>
> Thread-793::DEBUG::2015-09-22
> 19:12:22,837::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Reactor thread::INFO::2015-09-22
> 19:12:29,841::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
> Accepting connection from 127.0.0.1:52511
>
> Reactor thread::DEBUG::2015-09-22
> 19:12:29,848::protocoldetector::82::ProtocolDetector.Detector::(__init__)
> Using required_size=11
>
> Reactor thread::INFO::2015-09-22
> 19:12:29,849::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
> Detected protocol xml from 127.0.0.1:52511
>
> Reactor thread::DEBUG::2015-09-22
> 19:12:29,849::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml
> over http detected from ('127.0.0.1', 52511)
>
> BindingXMLRPC::INFO::2015-09-22
> 19:12:29,849::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
> request handler for 127.0.0.1:52511
>
> Thread-794::INFO::2015-09-22
> 19:12:29,849::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52511 started
>
> Thread-794::INFO::2015-09-22
> 19:12:29,851::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52511 stopped
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,520::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
> Calling 'StoragePool.connectStorageServer' in bridge with
> {u'connectionParams': [{u'id':
> u'00000000-0000-0000-0000-000000000000', u'connection':
> u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}],
> u'storagepoolID': u'00000000-0000-0000-0000-000000000000',
> u'domainType': 7}
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,520::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::moving from state init ->
> state preparing
>
> Thread-795::INFO::2015-09-22
> 19:12:35,521::logUtils::48::dispatcher::(wrapper) Run and protect:
> connectStorageServer(domType=7,
> spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id':
> u'00000000-0000-0000-0000-000000000000', u'connection':
> u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}],
> options=None)
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,539::fileUtils::143::Storage.fileUtils::(createdir) Creating
> directory: /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore mode:
> None
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,540::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/sudo
> -n /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
> -t glusterfs -o backup-volfile-servers=sjcstorage02:sjcvhost02
> sjcstorage01:/vmstore
> /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore (cwd None)
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,706::hsm::2417::Storage.HSM::(__prefetchDomains)
> glusterDomPath: glusterSD/*
>
> Thread-796::DEBUG::2015-09-22
> 19:12:35,707::__init__::298::IOProcessClient::(_run) Starting IOProcess...
>
> Thread-797::DEBUG::2015-09-22
> 19:12:35,712::__init__::298::IOProcessClient::(_run) Starting IOProcess...
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,721::hsm::2429::Storage.HSM::(__prefetchDomains) Found SD
> uuids: ()
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,721::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs:
> {41b75ca9-9783-42a7-9a23-10a2ae3cbb96: storage.glusterSD.findDomain,
> 597d5b5b-7c09-4de9-8840-6993bd9b61a6: storage.glusterSD.findDomain,
> ef17fec4-fecf-4d7e-b815-d1db4ef65225: storage.glusterSD.findDomain}
>
> Thread-795::INFO::2015-09-22
> 19:12:35,721::logUtils::51::dispatcher::(wrapper) Run and protect:
> connectStorageServer, Return response: {'statuslist': [{'status': 0,
> 'id': u'00000000-0000-0000-0000-000000000000'}]}
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,722::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::finished: {'statuslist':
> [{'status': 0, 'id': u'00000000-0000-0000-0000-000000000000'}]}
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,722::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::moving from state
> preparing -> state finished
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,722::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,722::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,722::task::993::Storage.TaskManager.Task::(_decref)
> Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::ref 0 aborting False
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,722::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
> Return 'StoragePool.connectStorageServer' in bridge with [{'status':
> 0, 'id': u'00000000-0000-0000-0000-000000000000'}]
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,722::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,775::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
> Calling 'StoragePool.connectStorageServer' in bridge with
> {u'connectionParams': [{u'id':
> u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection':
> u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}],
> u'storagepoolID': u'00000000-0000-0000-0000-000000000000',
> u'domainType': 7}
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,775::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::moving from state init ->
> state preparing
>
> Thread-798::INFO::2015-09-22
> 19:12:35,776::logUtils::48::dispatcher::(wrapper) Run and protect:
> connectStorageServer(domType=7,
> spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id':
> u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection':
> u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}],
> options=None)
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,777::hsm::2417::Storage.HSM::(__prefetchDomains)
> glusterDomPath: glusterSD/*
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,782::hsm::2429::Storage.HSM::(__prefetchDomains) Found SD
> uuids: ()
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,782::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs:
> {41b75ca9-9783-42a7-9a23-10a2ae3cbb96: storage.glusterSD.findDomain,
> 597d5b5b-7c09-4de9-8840-6993bd9b61a6: storage.glusterSD.findDomain,
> ef17fec4-fecf-4d7e-b815-d1db4ef65225: storage.glusterSD.findDomain}
>
> Thread-798::INFO::2015-09-22
> 19:12:35,782::logUtils::51::dispatcher::(wrapper) Run and protect:
> connectStorageServer, Return response: {'statuslist': [{'status': 0,
> 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,783::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::finished: {'statuslist':
> [{'status': 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,783::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::moving from state
> preparing -> state finished
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,783::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,783::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,783::task::993::Storage.TaskManager.Task::(_decref)
> Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::ref 0 aborting False
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,783::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
> Return 'StoragePool.connectStorageServer' in bridge with [{'status':
> 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,783::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,787::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
> Calling 'StorageDomain.create' in bridge with {u'name':
> u'sjcvmstore01', u'domainType': 7, u'domainClass': 1, u'typeArgs':
> u'sjcstorage01:/vmstore', u'version': u'3', u'storagedomainID':
> u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3'}
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,788::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from state init ->
> state preparing
>
> Thread-801::INFO::2015-09-22
> 19:12:35,788::logUtils::48::dispatcher::(wrapper) Run and protect:
> createStorageDomain(storageType=7,
> sdUUID=u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',
> domainName=u'sjcvmstore01', typeSpecificArg=u'sjcstorage01:/vmstore',
> domClass=1, domVersion=u'3', options=None)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,788::misc::733::Storage.SamplingMethod::(__call__) Trying to
> enter sampling method (storage.sdc.refreshStorage)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,788::misc::736::Storage.SamplingMethod::(__call__) Got in to
> sampling method
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,788::misc::733::Storage.SamplingMethod::(__call__) Trying to
> enter sampling method (storage.iscsi.rescan)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,788::misc::736::Storage.SamplingMethod::(__call__) Got in to
> sampling method
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,788::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI
> scan, this will take up to 30 seconds
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,788::iscsiadm::97::Storage.Misc.excCmd::(_runCmd)
> /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,821::misc::743::Storage.SamplingMethod::(__call__) Returning
> last result
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,821::misc::733::Storage.SamplingMethod::(__call__) Trying to
> enter sampling method (storage.hba.rescan)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,821::misc::736::Storage.SamplingMethod::(__call__) Got in to
> sampling method
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,821::hba::56::Storage.HBA::(rescan) Starting scan
>
> Thread-802::DEBUG::2015-09-22
> 19:12:35,882::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,912::hba::62::Storage.HBA::(rescan) Scan finished
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,912::misc::743::Storage.SamplingMethod::(__call__) Returning
> last result
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,912::multipath::77::Storage.Misc.excCmd::(rescan)
> /usr/bin/sudo -n /usr/sbin/multipath (cwd None)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,936::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS:
> <err> = ''; <rc> = 0
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,936::utils::661::root::(execCmd) /sbin/udevadm settle
> --timeout=5 (cwd None)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,946::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,947::lvm::498::Storage.OperationMutex::(_invalidateAllPvs)
> Operation 'lvm invalidate operation' got the operation mutex
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,947::lvm::500::Storage.OperationMutex::(_invalidateAllPvs)
> Operation 'lvm invalidate operation' released the operation mutex
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,947::lvm::509::Storage.OperationMutex::(_invalidateAllVgs)
> Operation 'lvm invalidate operation' got the operation mutex
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,948::lvm::511::Storage.OperationMutex::(_invalidateAllVgs)
> Operation 'lvm invalidate operation' released the operation mutex
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,948::lvm::529::Storage.OperationMutex::(_invalidateAllLvs)
> Operation 'lvm invalidate operation' got the operation mutex
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,948::lvm::531::Storage.OperationMutex::(_invalidateAllLvs)
> Operation 'lvm invalidate operation' released the operation mutex
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,948::misc::743::Storage.SamplingMethod::(__call__) Returning
> last result
>
> Thread-801::ERROR::2015-09-22
> 19:12:35,949::sdc::138::Storage.StorageDomainCache::(_findDomain)
> looking for unfetched domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3
>
> Thread-801::ERROR::2015-09-22
> 19:12:35,949::sdc::155::Storage.StorageDomainCache::(_findUnfetchedDomain)
> looking for domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,949::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation
> 'lvm reload operation' got the operation mutex
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,950::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
> /usr/sbin/lvm vgs --config ' devices { preferred_names =
> ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0
> disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50
> retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|'
> --ignoreskippedcluster -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
> c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 (cwd None)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,985::lvm::291::Storage.Misc.excCmd::(cmd) FAILED: <err> = '
> WARNING: lvmetad is running but disabled. Restart lvmetad before
> enabling it!\n Volume group "c02fda97-62e3-40d3-9a6e-ac5d100f8ad3"
> not found\n Cannot process volume group
> c02fda97-62e3-40d3-9a6e-ac5d100f8ad3\n'; <rc> = 5
>
> Thread-801::WARNING::2015-09-22
> 19:12:35,986::lvm::376::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 []
> [' WARNING: lvmetad is running but disabled. Restart lvmetad before
> enabling it!', ' Volume group "c02fda97-62e3-40d3-9a6e-ac5d100f8ad3"
> not found', ' Cannot process volume group
> c02fda97-62e3-40d3-9a6e-ac5d100f8ad3']
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,987::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation
> 'lvm reload operation' released the operation mutex
>
> Thread-801::ERROR::2015-09-22
> 19:12:35,997::sdc::144::Storage.StorageDomainCache::(_findDomain)
> domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 not found
>
> Traceback (most recent call last):
>
>   File "/usr/share/vdsm/storage/sdc.py", line 142, in _findDomain
>
>     dom = findMethod(sdUUID)
>
>   File "/usr/share/vdsm/storage/sdc.py", line 172, in _findUnfetchedDomain
>
>     raise se.StorageDomainDoesNotExist(sdUUID)
>
> StorageDomainDoesNotExist: Storage domain does not exist:
> (u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',)
>
> Thread-801::INFO::2015-09-22
> 19:12:35,998::nfsSD::69::Storage.StorageDomain::(create)
> sdUUID=c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 domainName=sjcvmstore01
> remotePath=sjcstorage01:/vmstore domClass=1
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,015::__init__::298::IOProcessClient::(_run) Starting IOProcess...
>
> Thread-801::ERROR::2015-09-22
> 19:12:36,067::task::866::Storage.TaskManager.Task::(_setError)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Unexpected error
>
> Traceback (most recent call last):
>
>   File "/usr/share/vdsm/storage/task.py", line 873, in _run
>
>     return fn(*args, **kargs)
>
>   File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
>
>     res = f(*args, **kwargs)
>
>   File "/usr/share/vdsm/storage/hsm.py", line 2697, in createStorageDomain
>
>     domVersion)
>
>   File "/usr/share/vdsm/storage/nfsSD.py", line 84, in create
>
>     remotePath, storageType, version)
>
>   File "/usr/share/vdsm/storage/fileSD.py", line 264, in _prepareMetadata
>
>     "create meta file '%s' failed: %s" % (metaFile, str(e)))
>
> StorageDomainMetadataCreationError: Error creating a storage domain's
> metadata: ("create meta file 'outbox' failed: [Errno 5] Input/output
> error",)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,067::task::885::Storage.TaskManager.Task::(_run)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Task._run:
> d2d29352-8677-45cb-a4ab-06aa32cf1acb (7,
> u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3', u'sjcvmstore01',
> u'sjcstorage01:/vmstore', 1, u'3') {} failed - stopping task
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,067::task::1246::Storage.TaskManager.Task::(stop)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::stopping in state
> preparing (force False)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,067::task::993::Storage.TaskManager.Task::(_decref)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::ref 1 aborting True
>
> Thread-801::INFO::2015-09-22
> 19:12:36,067::task::1171::Storage.TaskManager.Task::(prepare)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::aborting: Task is
> aborted: "Error creating a storage domain's metadata" - code 362
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,068::task::1176::Storage.TaskManager.Task::(prepare)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Prepare: aborted: Error
> creating a storage domain's metadata
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,068::task::993::Storage.TaskManager.Task::(_decref)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::ref 0 aborting True
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,068::task::928::Storage.TaskManager.Task::(_doAbort)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Task._doAbort: force False
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,068::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,068::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from state
> preparing -> state aborting
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,068::task::550::Storage.TaskManager.Task::(__state_aborting)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::_aborting: recover policy
> none
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,068::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from state
> aborting -> state failed
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,068::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,068::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-801::ERROR::2015-09-22
> 19:12:36,068::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': 'Error creating a storage domain\'s metadata: ("create
> meta file \'outbox\' failed: [Errno 5] Input/output error",)', 'code':
> 362}}
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,069::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,180::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
> Calling 'StoragePool.disconnectStorageServer' in bridge with
> {u'connectionParams': [{u'id':
> u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection':
> u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}],
> u'storagepoolID': u'00000000-0000-0000-0000-000000000000',
> u'domainType': 7}
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,181::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::moving from state init ->
> state preparing
>
> Thread-807::INFO::2015-09-22
> 19:12:36,182::logUtils::48::dispatcher::(wrapper) Run and protect:
> disconnectStorageServer(domType=7,
> spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id':
> u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection':
> u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}],
> options=None)
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,182::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/sudo
> -n /usr/bin/umount -f -l
> /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore (cwd None)
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,222::misc::733::Storage.SamplingMethod::(__call__) Trying to
> enter sampling method (storage.sdc.refreshStorage)
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,222::misc::736::Storage.SamplingMethod::(__call__) Got in to
> sampling method
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,222::misc::733::Storage.SamplingMethod::(__call__) Trying to
> enter sampling method (storage.iscsi.rescan)
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,222::misc::736::Storage.SamplingMethod::(__call__) Got in to
> sampling method
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,223::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI
> scan, this will take up to 30 seconds
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,223::iscsiadm::97::Storage.Misc.excCmd::(_runCmd)
> /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,258::misc::743::Storage.SamplingMethod::(__call__) Returning
> last result
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,258::misc::733::Storage.SamplingMethod::(__call__) Trying to
> enter sampling method (storage.hba.rescan)
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,258::misc::736::Storage.SamplingMethod::(__call__) Got in to
> sampling method
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,258::hba::56::Storage.HBA::(rescan) Starting scan
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,350::hba::62::Storage.HBA::(rescan) Scan finished
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,350::misc::743::Storage.SamplingMethod::(__call__) Returning
> last result
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,350::multipath::77::Storage.Misc.excCmd::(rescan)
> /usr/bin/sudo -n /usr/sbin/multipath (cwd None)
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,374::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS:
> <err> = ''; <rc> = 0
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,374::utils::661::root::(execCmd) /sbin/udevadm settle
> --timeout=5 (cwd None)
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,383::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,384::lvm::498::Storage.OperationMutex::(_invalidateAllPvs)
> Operation 'lvm invalidate operation' got the operation mutex
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,385::lvm::500::Storage.OperationMutex::(_invalidateAllPvs)
> Operation 'lvm invalidate operation' released the operation mutex
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,385::lvm::509::Storage.OperationMutex::(_invalidateAllVgs)
> Operation 'lvm invalidate operation' got the operation mutex
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,385::lvm::511::Storage.OperationMutex::(_invalidateAllVgs)
> Operation 'lvm invalidate operation' released the operation mutex
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,386::lvm::529::Storage.OperationMutex::(_invalidateAllLvs)
> Operation 'lvm invalidate operation' got the operation mutex
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,386::lvm::531::Storage.OperationMutex::(_invalidateAllLvs)
> Operation 'lvm invalidate operation' released the operation mutex
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,386::misc::743::Storage.SamplingMethod::(__call__) Returning
> last result
>
> Thread-807::INFO::2015-09-22
> 19:12:36,386::logUtils::51::dispatcher::(wrapper) Run and protect:
> disconnectStorageServer, Return response: {'statuslist': [{'status':
> 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,387::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::finished: {'statuslist':
> [{'status': 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,387::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::moving from state
> preparing -> state finished
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,387::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,387::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,387::task::993::Storage.TaskManager.Task::(_decref)
> Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::ref 0 aborting False
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,388::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
> Return 'StoragePool.disconnectStorageServer' in bridge with
> [{'status': 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,388::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Thread-808::DEBUG::2015-09-22
> 19:12:37,868::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving from state init ->
> state preparing
>
> Thread-808::INFO::2015-09-22
> 19:12:37,868::logUtils::48::dispatcher::(wrapper) Run and protect:
> repoStats(options=None)
>
> Thread-808::INFO::2015-09-22
> 19:12:37,868::logUtils::51::dispatcher::(wrapper) Run and protect:
> repoStats, Return response: {}
>
> Thread-808::DEBUG::2015-09-22
> 19:12:37,868::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::finished: {}
>
> Thread-808::DEBUG::2015-09-22
> 19:12:37,868::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving from state
> preparing -> state finished
>
> Thread-808::DEBUG::2015-09-22
> 19:12:37,868::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-808::DEBUG::2015-09-22
> 19:12:37,868::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-808::DEBUG::2015-09-22
> 19:12:37,868::task::993::Storage.TaskManager.Task::(_decref)
> Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::ref 0 aborting False
>
> Thread-808::DEBUG::2015-09-22
> 19:12:37,873::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Reactor thread::INFO::2015-09-22
> 19:12:44,867::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
> Accepting connection from 127.0.0.1:52512
>
> Reactor thread::DEBUG::2015-09-22
> 19:12:44,874::protocoldetector::82::ProtocolDetector.Detector::(__init__)
> Using required_size=11
>
> Reactor thread::INFO::2015-09-22
> 19:12:44,875::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
> Detected protocol xml from 127.0.0.1:52512
>
> Reactor thread::DEBUG::2015-09-22
> 19:12:44,875::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml
> over http detected from ('127.0.0.1', 52512)
>
> BindingXMLRPC::INFO::2015-09-22
> 19:12:44,875::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
> request handler for 127.0.0.1:52512
>
> Thread-809::INFO::2015-09-22
> 19:12:44,876::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52512 started
>
> Thread-809::INFO::2015-09-22
> 19:12:44,877::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52512 stopped
>
> Thread-810::DEBUG::2015-09-22
> 19:12:50,889::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Thread-811::DEBUG::2015-09-22
> 19:12:52,902::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::moving from state init ->
> state preparing
>
> Thread-811::INFO::2015-09-22
> 19:12:52,902::logUtils::48::dispatcher::(wrapper) Run and protect:
> repoStats(options=None)
>
> Thread-811::INFO::2015-09-22
> 19:12:52,902::logUtils::51::dispatcher::(wrapper) Run and protect:
> repoStats, Return response: {}
>
> Thread-811::DEBUG::2015-09-22
> 19:12:52,902::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::finished: {}
>
> Thread-811::DEBUG::2015-09-22
> 19:12:52,903::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::moving from state
> preparing -> state finished
>
> Thread-811::DEBUG::2015-09-22
> 19:12:52,903::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-811::DEBUG::2015-09-22
> 19:12:52,903::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-811::DEBUG::2015-09-22
> 19:12:52,903::task::993::Storage.TaskManager.Task::(_decref)
> Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::ref 0 aborting False
>
> Thread-811::DEBUG::2015-09-22
> 19:12:52,908::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Reactor thread::INFO::2015-09-22
> 19:12:59,895::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
> Accepting connection from 127.0.0.1:52513
>
> Reactor thread::DEBUG::2015-09-22
> 19:12:59,902::protocoldetector::82::ProtocolDetector.Detector::(__init__)
> Using required_size=11
>
> Reactor thread::INFO::2015-09-22
> 19:12:59,902::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
> Detected protocol xml from 127.0.0.1:52513
>
> Reactor thread::DEBUG::2015-09-22
> 19:12:59,902::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml
> over http detected from ('127.0.0.1', 52513)
>
> BindingXMLRPC::INFO::2015-09-22
> 19:12:59,903::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
> request handler for 127.0.0.1:52513
>
> Thread-812::INFO::2015-09-22
> 19:12:59,903::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52513 started
>
> Thread-812::INFO::2015-09-22
> 19:12:59,904::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52513 stopped
>
> Thread-813::DEBUG::2015-09-22
> 19:13:05,898::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Thread-814::DEBUG::2015-09-22
> 19:13:07,934::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::moving from state init ->
> state preparing
>
> Thread-814::INFO::2015-09-22
> 19:13:07,935::logUtils::48::dispatcher::(wrapper) Run and protect:
> repoStats(options=None)
>
> Thread-814::INFO::2015-09-22
> 19:13:07,935::logUtils::51::dispatcher::(wrapper) Run and protect:
> repoStats, Return response: {}
>
> Thread-814::DEBUG::2015-09-22
> 19:13:07,935::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::finished: {}
>
> Thread-814::DEBUG::2015-09-22
> 19:13:07,935::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::moving from state
> preparing -> state finished
>
> Thread-814::DEBUG::2015-09-22
> 19:13:07,935::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-814::DEBUG::2015-09-22
> 19:13:07,935::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-814::DEBUG::2015-09-22
> 19:13:07,935::task::993::Storage.TaskManager.Task::(_decref)
> Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::ref 0 aborting False
>
> Thread-814::DEBUG::2015-09-22
> 19:13:07,939::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Reactor thread::INFO::2015-09-22
> 19:13:14,921::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
> Accepting connection from 127.0.0.1:52515
>
> Reactor thread::DEBUG::2015-09-22
> 19:13:14,927::protocoldetector::82::ProtocolDetector.Detector::(__init__)
> Using required_size=11
>
> Reactor thread::INFO::2015-09-22
> 19:13:14,928::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
> Detected protocol xml from 127.0.0.1:52515
>
> Reactor thread::DEBUG::2015-09-22
> 19:13:14,928::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml
> over http detected from ('127.0.0.1', 52515)
>
> BindingXMLRPC::INFO::2015-09-22
> 19:13:14,928::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
> request handler for 127.0.0.1:52515
>
> Thread-815::INFO::2015-09-22
> 19:13:14,928::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52515 started
>
> Thread-815::INFO::2015-09-22
> 19:13:14,930::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52515 stopped
>
> Thread-816::DEBUG::2015-09-22
> 19:13:20,906::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
>
>
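[Editor's note: the vdsm.log excerpts above all show the same healthy lifecycle — a repoStats task moves init -> preparing, returns {}, then preparing -> finished with ref 0 and aborting False. When skimming a long vdsm.log for a stuck task, a small script can flag any task UUID that enters "preparing" but never reaches "finished". This is a sketch only; the line format is inferred from the excerpts above, not from vdsm source.]

```python
import re
from collections import defaultdict

# Matches vdsm task state transitions, e.g.
#   Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving from state init -> state preparing
TRANSITION = re.compile(
    r"Task=`(?P<uuid>[0-9a-f-]+)`::moving from state (?P<src>\w+)\s*->\s*state (?P<dst>\w+)"
)

def unfinished_tasks(log_text):
    """Return UUIDs of tasks that entered 'preparing' but never 'finished'."""
    states = defaultdict(set)
    for line in log_text.splitlines():
        m = TRANSITION.search(line)
        if m:
            states[m.group("uuid")].add(m.group("dst"))
    return [uuid for uuid, seen in states.items()
            if "preparing" in seen and "finished" not in seen]
```

In the log above every task completes, so this would report nothing; on a log captured during the failed storage-domain creation it may point at the operation that hung.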
> gluster logs
>
> +------------------------------------------------------------------------------+
>
> 1: volume vmstore-client-0
>
> 2: type protocol/client
>
> 3: option ping-timeout 42
>
> 4: option remote-host sjcstorage01
>
> 5: option remote-subvolume /export/vmstore/brick01
>
> 6: option transport-type socket
>
> 7: option send-gids true
>
> 8: end-volume
>
> 9:
>
> 10: volume vmstore-client-1
>
> 11: type protocol/client
>
> 12: option ping-timeout 42
>
> 13: option remote-host sjcstorage02
>
> 14: option remote-subvolume /export/vmstore/brick01
>
> 15: option transport-type socket
>
> 16: option send-gids true
>
> 17: end-volume
>
> 18:
>
> 19: volume vmstore-client-2
>
> 20: type protocol/client
>
> 21: option ping-timeout 42
>
> 22: option remote-host sjcvhost02
>
> 23: option remote-subvolume /export/vmstore/brick01
>
> 24: option transport-type socket
>
> 25: option send-gids true
>
> 26: end-volume
>
> 27:
>
> 28: volume vmstore-replicate-0
>
> 29: type cluster/replicate
>
> 30: option arbiter-count 1
>
> 31: subvolumes vmstore-client-0 vmstore-client-1 vmstore-client-2
>
> 32: end-volume
>
> 33:
>
> 34: volume vmstore-dht
>
> 35: type cluster/distribute
>
> 36: subvolumes vmstore-replicate-0
>
> 37: end-volume
>
> 38:
>
> 39: volume vmstore-write-behind
>
> 40: type performance/write-behind
>
> 41: subvolumes vmstore-dht
>
> 42: end-volume
>
> 43:
>
> 44: volume vmstore-read-ahead
>
> 45: type performance/read-ahead
>
> 46: subvolumes vmstore-write-behind
>
> 47: end-volume
>
> 48:
>
> 49: volume vmstore-readdir-ahead
>
> 50: type performance/readdir-ahead
>
> 51: subvolumes vmstore-read-ahead
>
> 52: end-volume
>
> 53:
>
> 54: volume vmstore-io-cache
>
> 55: type performance/io-cache
>
> 56: subvolumes vmstore-readdir-ahead
>
> 57: end-volume
>
> 58:
>
> 59: volume vmstore-quick-read
>
> 60: type performance/quick-read
>
> 61: subvolumes vmstore-io-cache
>
> 62: end-volume
>
> 63:
>
> 64: volume vmstore-open-behind
>
> 65: type performance/open-behind
>
> 66: subvolumes vmstore-quick-read
>
> 67: end-volume
>
> 68:
>
> 69: volume vmstore-md-cache
>
> 70: type performance/md-cache
>
> 71: subvolumes vmstore-open-behind
>
> 72: end-volume
>
> 73:
>
> 74: volume vmstore
>
> 75: type debug/io-stats
>
> 76: option latency-measurement off
>
> 77: option count-fop-hits off
>
> 78: subvolumes vmstore-md-cache
>
> 79: end-volume
>
> 80:
>
> 81: volume meta-autoload
>
> 82: type meta
>
> 83: subvolumes vmstore
>
> 84: end-volume
>
> 85:
>
> +------------------------------------------------------------------------------+
>
> [2015-09-22 05:29:07.586205] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
> 0-vmstore-client-0: changing port to 49153 (from 0)
>
> [2015-09-22 05:29:07.586325] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
> 0-vmstore-client-1: changing port to 49153 (from 0)
>
> [2015-09-22 05:29:07.586480] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
> 0-vmstore-client-2: changing port to 49153 (from 0)
>
> [2015-09-22 05:29:07.595052] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-vmstore-client-0: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
>
> [2015-09-22 05:29:07.595397] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-vmstore-client-1: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
>
> [2015-09-22 05:29:07.595576] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-vmstore-client-2: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
>
> [2015-09-22 05:29:07.595721] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-0:
> Connected to vmstore-client-0, attached to remote volume
> '/export/vmstore/brick01'.
>
> [2015-09-22 05:29:07.595738] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-0:
> Server and Client lk-version numbers are not same, reopening the fds
>
> [2015-09-22 05:29:07.596044] I [MSGID: 108005]
> [afr-common.c:3998:afr_notify] 0-vmstore-replicate-0: Subvolume
> 'vmstore-client-0' came back up; going online.
>
> [2015-09-22 05:29:07.596170] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-1:
> Connected to vmstore-client-1, attached to remote volume
> '/export/vmstore/brick01'.
>
> [2015-09-22 05:29:07.596189] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-1:
> Server and Client lk-version numbers are not same, reopening the fds
>
> [2015-09-22 05:29:07.596495] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-2:
> Connected to vmstore-client-2, attached to remote volume
> '/export/vmstore/brick01'.
>
> [2015-09-22 05:29:07.596506] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-2:
> Server and Client lk-version numbers are not same, reopening the fds
>
> [2015-09-22 05:29:07.608758] I [fuse-bridge.c:5053:fuse_graph_setup]
> 0-fuse: switched to graph 0
>
> [2015-09-22 05:29:07.608910] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-0:
> Server lk version = 1
>
> [2015-09-22 05:29:07.608936] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-1:
> Server lk version = 1
>
> [2015-09-22 05:29:07.608950] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-2:
> Server lk version = 1
>
> [2015-09-22 05:29:07.609695] I [MSGID: 101190]
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
> thread with index 2
>
> [2015-09-22 05:29:07.609868] I [fuse-bridge.c:3979:fuse_init]
> 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22
> kernel 7.22
>
> [2015-09-22 05:29:07.616577] I [MSGID: 109063]
> [dht-layout.c:702:dht_layout_normalize] 0-vmstore-dht: Found anomalies
> in / (gfid = 00000000-0000-0000-0000-000000000001). Holes=1 overlaps=0
>
> [2015-09-22 05:29:07.620230] I [MSGID: 109036]
> [dht-common.c:7754:dht_log_new_layout_for_dir_selfheal] 0-vmstore-dht:
> Setting layout of / with [Subvol_name: vmstore-replicate-0, Err: -1 ,
> Start: 0 , Stop: 4294967295 , Hash: 1 ],
>
> [2015-09-22 05:29:08.122415] W [fuse-bridge.c:1230:fuse_err_cbk]
> 0-glusterfs-fuse: 26: REMOVEXATTR() /__DIRECT_IO_TEST__ => -1 (No data
> available)
>
> [2015-09-22 05:29:08.137359] I [MSGID: 109036]
> [dht-common.c:7754:dht_log_new_layout_for_dir_selfheal] 0-vmstore-dht:
> Setting layout of /061b73d5-ae59-462e-b674-ea9c60d436c2 with
> [Subvol_name: vmstore-replicate-0, Err: -1 , Start: 0 , Stop:
> 4294967295 , Hash: 1 ],
>
> [2015-09-22 05:29:08.145835] I [MSGID: 109036]
> [dht-common.c:7754:dht_log_new_layout_for_dir_selfheal] 0-vmstore-dht:
> Setting layout of /061b73d5-ae59-462e-b674-ea9c60d436c2/dom_md with
> [Subvol_name: vmstore-replicate-0, Err: -1 , Start: 0 , Stop:
> 4294967295 , Hash: 1 ],
>
> [2015-09-22 05:30:57.897819] I [MSGID: 100030]
> [glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs: Started running
> /usr/sbin/glusterfs version 3.7.4 (args: /usr/sbin/glusterfs
> --volfile-server=sjcvhost02 --volfile-server=sjcstorage01
> --volfile-server=sjcstorage02 --volfile-id=/vmstore
> /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
>
> [2015-09-22 05:30:57.909889] I [MSGID: 101190]
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
> thread with index 1
>
> [2015-09-22 05:30:57.923087] I [MSGID: 114020] [client.c:2118:notify]
> 0-vmstore-client-0: parent translators are ready, attempting connect
> on transport
>
> [2015-09-22 05:30:57.925701] I [MSGID: 114020] [client.c:2118:notify]
> 0-vmstore-client-1: parent translators are ready, attempting connect
> on transport
>
> [2015-09-22 05:30:57.927984] I [MSGID: 114020] [client.c:2118:notify]
> 0-vmstore-client-2: parent translators are ready, attempting connect
> on transport
>
> Final graph:
>
> +------------------------------------------------------------------------------+
>
> 1: volume vmstore-client-0
>
> 2: type protocol/client
>
> 3: option ping-timeout 42
>
> 4: option remote-host sjcstorage01
>
> 5: option remote-subvolume /export/vmstore/brick01
>
> 6: option transport-type socket
>
> 7: option send-gids true
>
> 8: end-volume
>
> 9:
>
> 10: volume vmstore-client-1
>
> 11: type protocol/client
>
> 12: option ping-timeout 42
>
> 13: option remote-host sjcstorage02
>
> 14: option remote-subvolume /export/vmstore/brick01
>
> 15: option transport-type socket
>
> 16: option send-gids true
>
> 17: end-volume
>
> 18:
>
> 19: volume vmstore-client-2
>
> 20: type protocol/client
>
> 21: option ping-timeout 42
>
> 22: option remote-host sjcvhost02
>
> 23: option remote-subvolume /export/vmstore/brick01
>
> 24: option transport-type socket
>
> 25: option send-gids true
>
> 26: end-volume
>
> 27:
>
> 28: volume vmstore-replicate-0
>
> 29: type cluster/replicate
>
> 30: option arbiter-count 1
>
> 31: subvolumes vmstore-client-0 vmstore-client-1 vmstore-client-2
>
> 32: end-volume
>
> 33:
>
> 34: volume vmstore-dht
>
> 35: type cluster/distribute
>
> 36: subvolumes vmstore-replicate-0
>
> 37: end-volume
>
> 38:
>
> 39: volume vmstore-write-behind
>
> 40: type performance/write-behind
>
> 41: subvolumes vmstore-dht
>
> 42: end-volume
>
> 43:
>
> 44: volume vmstore-read-ahead
>
> 45: type performance/read-ahead
>
> 46: subvolumes vmstore-write-behind
>
> 47: end-volume
>
> 48:
>
> 49: volume vmstore-readdir-ahead
>
> 50: type performance/readdir-ahead
>
> 51: subvolumes vmstore-read-ahead
>
> 52: end-volume
>
> 53:
>
> 54: volume vmstore-io-cache
>
> 55: type performance/io-cache
>
> 56: subvolumes vmstore-readdir-ahead
>
> 57: end-volume
>
> 58:
>
> 59: volume vmstore-quick-read
>
> 60: type performance/quick-read
>
> 61: subvolumes vmstore-io-cache
>
> 62: end-volume
>
> 63:
>
> 64: volume vmstore-open-behind
>
> 65: type performance/open-behind
>
> 66: subvolumes vmstore-quick-read
>
> 67: end-volume
>
> 68:
>
> 69: volume vmstore-md-cache
>
> 70: type performance/md-cache
>
> 71: subvolumes vmstore-open-behind
>
> 72: end-volume
>
> 73:
>
> 74: volume vmstore
>
> 75: type debug/io-stats
>
> 76: option latency-measurement off
>
> 77: option count-fop-hits off
>
> 78: subvolumes vmstore-md-cache
>
> 79: end-volume
>
> 80:
>
> 81: volume meta-autoload
>
> 82: type meta
>
> 83: subvolumes vmstore
>
> 84: end-volume
>
> 85:
>
> +------------------------------------------------------------------------------+
>
> [2015-09-22 05:30:57.934021] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
> 0-vmstore-client-0: changing port to 49153 (from 0)
>
> [2015-09-22 05:30:57.934145] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
> 0-vmstore-client-1: changing port to 49153 (from 0)
>
> [2015-09-22 05:30:57.934491] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
> 0-vmstore-client-2: changing port to 49153 (from 0)
>
> [2015-09-22 05:30:57.942198] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-vmstore-client-0: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
>
> [2015-09-22 05:30:57.942545] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-vmstore-client-1: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
>
> [2015-09-22 05:30:57.942659] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-vmstore-client-2: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
>
> [2015-09-22 05:30:57.942797] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-0:
> Connected to vmstore-client-0, attached to remote volume
> '/export/vmstore/brick01'.
>
> [2015-09-22 05:30:57.942808] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-0:
> Server and Client lk-version numbers are not same, reopening the fds
>
> [2015-09-22 05:30:57.943036] I [MSGID: 108005]
> [afr-common.c:3998:afr_notify] 0-vmstore-replicate-0: Subvolume
> 'vmstore-client-0' came back up; going online.
>
> [2015-09-22 05:30:57.943078] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-1:
> Connected to vmstore-client-1, attached to remote volume
> '/export/vmstore/brick01'.
>
> [2015-09-22 05:30:57.943086] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-1:
> Server and Client lk-version numbers are not same, reopening the fds
>
> [2015-09-22 05:30:57.943292] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-2:
> Connected to vmstore-client-2, attached to remote volume
> '/export/vmstore/brick01'.
>
> [2015-09-22 05:30:57.943302] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-2:
> Server and Client lk-version numbers are not same, reopening the fds
>
> [2015-09-22 05:30:57.953887] I [fuse-bridge.c:5053:fuse_graph_setup]
> 0-fuse: switched to graph 0
>
> [2015-09-22 05:30:57.954071] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-0:
> Server lk version = 1
>
> [2015-09-22 05:30:57.954105] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-1:
> Server lk version = 1
>
> [2015-09-22 05:30:57.954124] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-2:
> Server lk version = 1
>
> [2015-09-22 05:30:57.955282] I [fuse-bridge.c:3979:fuse_init]
> 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22
> kernel 7.22
>
> [2015-09-22 05:30:57.955738] I [MSGID: 101190]
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
> thread with index 2
>
> [2015-09-22 05:30:57.970232] I [fuse-bridge.c:4900:fuse_thread_proc]
> 0-fuse: unmounting /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
>
> [2015-09-22 05:30:57.970834] W [glusterfsd.c:1219:cleanup_and_exit]
> (-->/lib64/libpthread.so.0(+0x7df5) [0x7f187139fdf5]
> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f1872a09785]
> -->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7f1872a09609] ) 0-:
> received signum (15), shutting down
>
> [2015-09-22 05:30:57.970848] I [fuse-bridge.c:5595:fini] 0-fuse:
> Unmounting '/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.
>
> [2015-09-22 05:30:58.420973] I [fuse-bridge.c:4900:fuse_thread_proc]
> 0-fuse: unmounting /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
>
> [2015-09-22 05:30:58.421355] W [glusterfsd.c:1219:cleanup_and_exit]
> (-->/lib64/libpthread.so.0(+0x7df5) [0x7f8267cd4df5]
> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f826933e785]
> -->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7f826933e609] ) 0-:
> received signum (15), shutting down
>
> [2015-09-22 05:30:58.421369] I [fuse-bridge.c:5595:fini] 0-fuse:
> Unmounting '/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.
>
> [2015-09-22 05:31:09.534410] I [MSGID: 100030]
> [glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs: Started running
> /usr/sbin/glusterfs version 3.7.4 (args: /usr/sbin/glusterfs
> --volfile-server=sjcvhost02 --volfile-server=sjcstorage01
> --volfile-server=sjcstorage02 --volfile-id=/vmstore
> /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
>
> [2015-09-22 05:31:09.545686] I [MSGID: 101190]
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
> thread with index 1
>
> [2015-09-22 05:31:09.553019] I [MSGID: 114020] [client.c:2118:notify]
> 0-vmstore-client-0: parent translators are ready, attempting connect
> on transport
>
> [2015-09-22 05:31:09.555552] I [MSGID: 114020] [client.c:2118:notify]
> 0-vmstore-client-1: parent translators are ready, attempting connect
> on transport
>
> [2015-09-22 05:31:09.557989] I [MSGID: 114020] [client.c:2118:notify]
> 0-vmstore-client-2: parent translators are ready, attempting connect
> on transport
>
> Final graph:
>
> +------------------------------------------------------------------------------+
>
> 1: volume vmstore-client-0
>
> 2: type protocol/client
>
> 3: option ping-timeout 42
>
> 4: option remote-host sjcstorage01
>
> 5: option remote-subvolume /export/vmstore/brick01
>
> 6: option transport-type socket
>
> 7: option send-gids true
>
> 8: end-volume
>
> 9:
>
> 10: volume vmstore-client-1
>
> 11: type protocol/client
>
> 12: option ping-timeout 42
>
> 13: option remote-host sjcstorage02
>
> 14: option remote-subvolume /export/vmstore/brick01
>
> 15: option transport-type socket
>
> 16: option send-gids true
>
> 17: end-volume
>
> 18:
>
> 19: volume vmstore-client-2
>
> 20: type protocol/client
>
> 21: option ping-timeout 42
>
> 22: option remote-host sjcvhost02
>
> 23: option remote-subvolume /export/vmstore/brick01
>
> 24: option transport-type socket
>
> 25: option send-gids true
>
> 26: end-volume
>
> 27:
>
> 28: volume vmstore-replicate-0
>
> 29: type cluster/replicate
>
> 30: option arbiter-count 1
>
> 31: subvolumes vmstore-client-0 vmstore-client-1 vmstore-client-2
>
> 32: end-volume
>
> 33:
>
> 34: volume vmstore-dht
>
> 35: type cluster/distribute
>
> 36: subvolumes vmstore-replicate-0
>
> 37: end-volume
>
> 38:
>
> 39: volume vmstore-write-behind
>
> 40: type performance/write-behind
>
> 41: subvolumes vmstore-dht
>
> 42: end-volume
>
> 43:
>
> 44: volume vmstore-read-ahead
>
> 45: type performance/read-ahead
>
> 46: subvolumes vmstore-write-behind
>
> 47: end-volume
>
> 48:
>
> 49: volume vmstore-readdir-ahead
>
> 50: type performance/readdir-ahead
>
> 51: subvolumes vmstore-read-ahead
>
> 52: end-volume
>
> 53:
>
> 54: volume vmstore-io-cache
>
> 55: type performance/io-cache
>
> 56: subvolumes vmstore-readdir-ahead
>
> 57: end-volume
>
> 58:
>
> 59: volume vmstore-quick-read
>
> 60: type performance/quick-read
>
> 61: subvolumes vmstore-io-cache
>
> 62: end-volume
>
> 63:
>
> 64: volume vmstore-open-behind
>
> 65: type performance/open-behind
>
> 66: subvolumes vmstore-quick-read
>
> 67: end-volume
>
> 68:
>
> 69: volume vmstore-md-cache
>
> 70: type performance/md-cache
>
> 71: subvolumes vmstore-open-behind
>
> 72: end-volume
>
> 73:
>
> 74: volume vmstore
>
> 75: type debug/io-stats
>
> 76: option latency-measurement off
>
> 77: option count-fop-hits off
>
> 78: subvolumes vmstore-md-cache
>
> 79: end-volume
>
> 80:
>
> 81: volume meta-autoload
>
> 82: type meta
>
> 83: subvolumes vmstore
>
> 84: end-volume
>
> 85:
>
> +------------------------------------------------------------------------------+
>
> [2015-09-22 05:31:09.563262] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
> 0-vmstore-client-0: changing port to 49153 (from 0)
>
> [2015-09-22 05:31:09.563431] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
> 0-vmstore-client-1: changing port to 49153 (from 0)
>
> [2015-09-22 05:31:09.563877] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
> 0-vmstore-client-2: changing port to 49153 (from 0)
>
> [2015-09-22 05:31:09.572443] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-vmstore-client-1: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
>
> [2015-09-22 05:31:09.572599] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-vmstore-client-0: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
>
> [2015-09-22 05:31:09.572742] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-vmstore-client-2: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
>
> [2015-09-22 05:31:09.573165] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-1:
> Connected to vmstore-client-1, attached to remote volume
> '/export/vmstore/brick01'.
>
> [2015-09-22 05:31:09.573186] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-1:
> Server and Client lk-version numbers are not same, reopening the fds
>
> [2015-09-22 05:31:09.573395] I [MSGID: 108005]
> [afr-common.c:3998:afr_notify] 0-vmstore-replicate-0: Subvolume
> 'vmstore-client-1' came back up; going online.
>
> [2015-09-22 05:31:09.573427] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-0:
> Connected to vmstore-client-0, attached to remote volume
> '/export/vmstore/brick01'.
>
> [2015-09-22 05:31:09.573435] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-0:
> Server and Client lk-version numbers are not same, reopening the fds
>
> [2015-09-22 05:31:09.573754] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-2:
> Connected to vmstore-client-2, attached to remote volume
> '/export/vmstore/brick01'.
>
> [2015-09-22 05:31:09.573783] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-2:
> Server and Client lk-version numbers are not same, reopening the fds
>
> [2015-09-22 05:31:09.577192] I [fuse-bridge.c:5053:fuse_graph_setup]
> 0-fuse: switched to graph 0
>
> [2015-09-22 05:31:09.577302] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-1:
> Server lk version = 1
>
> [2015-09-22 05:31:09.577325] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-0:
> Server lk version = 1
>
> [2015-09-22 05:31:09.577339] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-2:
> Server lk version = 1
>
> [2015-09-22 05:31:09.578125] I [fuse-bridge.c:3979:fuse_init]
> 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22
> kernel 7.22
>
> [2015-09-22 05:31:09.578636] I [MSGID: 101190]
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
> thread with index 2
>
> [2015-09-22 05:31:10.073698] I [fuse-bridge.c:4900:fuse_thread_proc]
> 0-fuse: unmounting /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
>
> [2015-09-22 05:31:10.073977] W [glusterfsd.c:1219:cleanup_and_exit]
> (-->/lib64/libpthread.so.0(+0x7df5) [0x7f6b9ba88df5]
> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f6b9d0f2785]
> -->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7f6b9d0f2609] ) 0-:
> received signum (15), shutting down
>
> [2015-09-22 05:31:10.073993] I [fuse-bridge.c:5595:fini] 0-fuse:
> Unmounting '/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.
>
> [2015-09-22 05:31:20.184700] I [MSGID: 100030]
> [glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs: Started running
> /usr/sbin/glusterfs version 3.7.4 (args: /usr/sbin/glusterfs
> --volfile-server=sjcvhost02 --volfile-server=sjcstorage01
> --volfile-server=sjcstorage02 --volfile-id=/vmstore
> /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
>
> [2015-09-22 05:31:20.194928] I [MSGID: 101190]
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
> thread with index 1
>
> [2015-09-22 05:31:20.200701] I [MSGID: 114020] [client.c:2118:notify]
> 0-vmstore-client-0: parent translators are ready, attempting connect
> on transport
>
> [2015-09-22 05:31:20.203110] I [MSGID: 114020] [client.c:2118:notify]
> 0-vmstore-client-1: parent translators are ready, attempting connect
> on transport
>
> [2015-09-22 05:31:20.205708] I [MSGID: 114020] [client.c:2118:notify]
> 0-vmstore-client-2: parent translators are ready, attempting connect
> on transport
>
> Final graph:
>
>
>
> Hope this helps.
>
>
> thanks again
>
>
> Brett Stevens
>
>
>
> On Tue, Sep 22, 2015 at 10:14 PM, Sahina Bose <sabose(a)redhat.com> wrote:
>
>
>
> On 09/22/2015 02:17 PM, Brett Stevens wrote:
>> Hi. First time on the lists. I've searched for this but no luck
>> so sorry if this has been covered before.
>>
>> Im working with the latest 3.6 beta with the following
>> infrastructure.
>>
>> 1 management host (to be used for a number of tasks so chose not
>> to use self hosted, we are a school and will need to keep an eye
>> on hardware costs)
>> 2 compute nodes
>> 2 gluster nodes
>>
>> so far built one gluster volume using the gluster cli to give me
>> 2 nodes and one arbiter node (management host)
>>
>> so far, every time I create a volume, it shows up strait away on
>> the ovirt gui. however no matter what I try, I cannot create or
>> import it as a data domain.
>>
>> the current error in the ovirt gui is "Error while executing
>> action AddGlusterFsStorageDomain: Error creating a storage
>> domain's metadata"
>
> Please provide vdsm and gluster logs
>
>>
>> logs, continuously rolling the following errors around
>>
>> Scheduler_Worker-53) [] START,
>> GlusterVolumesListVDSCommand(HostName = sjcstorage02,
>> GlusterVolumesListVDSParameters:{runAsync='true',
>> hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id: 24198fbf
>>
>> 2015-09-22 03:57:29,903 WARN
>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>> (DefaultQuartzScheduler_Worker-53) [] Could not associate brick
>> 'sjcstorage01:/export/vmstore/brick01' of volume
>> '878a316d-2394-4aae-bdf8-e10eea38225e' with correct network as no
>> gluster network found in cluster
>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>
>
> What is the hostname provided in ovirt engine for sjcstorage01 ?
> Does this host have multiple nics?
>
> Could you provide output of gluster volume info?
> Please note, that these errors are not related to error in
> creating storage domain. However, these errors could prevent you
> from monitoring the state of gluster volume from oVirt
>
>> 2015-09-22 03:57:29,905 WARN
>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>> (DefaultQuartzScheduler_Worker-53) [] Could not associate brick
>> 'sjcstorage02:/export/vmstore/brick01' of volume
>> '878a316d-2394-4aae-bdf8-e10eea38225e' with correct network as no
>> gluster network found in cluster
>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>
>> 2015-09-22 03:57:29,905 WARN
>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>> (DefaultQuartzScheduler_Worker-53) [] Could not add brick
>> 'sjcvhost02:/export/vmstore/brick01' to volume
>> '878a316d-2394-4aae-bdf8-e10eea38225e' - server uuid
>> '29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in cluster
>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>
>> 2015-09-22 03:57:29,905 INFO
>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
>> (DefaultQuartzScheduler_Worker-53) [] FINISH,
>> GlusterVolumesListVDSCommand, return:
>> {878a316d-2394-4aae-bdf8-e10eea38225e=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@41e93fb1},
>> log id: 24198fbf
>>
>>
>> I'm new to ovirt and gluster, so any help would be great
>>
>>
>> thanks
>>
>>
>> Brett Stevens
>>
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
+ ovirt-users<br>
<br>
A few questions to clarify your setup:<br>
<span class="">sjcvhost03 - is this your arbiter node and ovirt
management node? And are you running compute + storage on the
same nodes - i.e., </span><span class="">sjcstorage01, </span><span
class="">sjcstorage02, </span><span class="">sjcvhost03
(arbiter).<br>
<br>
</span><br>
<span class=""> CreateStorageDomainVDSCommand(HostName = sjcvhost03,
CreateStorageDomainVDSCommandParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storageDomain='StorageDomainStatic:{name='sjcvmstore',
id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
args='sjcstorage01:/vmstore'}), log id: b9fe587<br>
<br>
- fails with </span><span class="">Error creating a storage
domain's metadata: ("create meta file 'outbox' failed: [Errno 5]
Input/output error",<br>
<br>
Are the vdsm logs you provided from </span><span class="">sjcvhost03?
There are no errors to be seen in the gluster log you provided.
Could you provide mount log from </span><span class=""><span
class="">sjcvhost03</span> (at
/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore.log most
likely)<br>
If possible, /var/log/glusterfs/* from the 3 storage nodes.<br>
<br>
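In the meantime, a quick check you could run on sjcvhost03 - a minimal
sketch, assuming the mount path shown in the engine logs above (the
test filename here is made up):<br>

```shell
# Minimal sketch: attempt the same kind of small-file write vdsm does
# when creating storage-domain metadata (e.g. the 'outbox' file).
# The mount path is the one from this thread's logs; adjust as needed.
MNT="/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore"
if touch "$MNT/__write_test" 2>/dev/null; then
    echo "write OK"
    rm -f "$MNT/__write_test"
else
    # An I/O error here reproduces: create meta file 'outbox' failed:
    # [Errno 5] Input/output error
    echo "write FAILED - check the gluster mount log on this host"
fi
```

If the write fails the same way, the mount log should show which brick
returned the underlying error.<br>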
thanks<br>
sahina<br>
<br>
</span>
<div class="moz-cite-prefix">On 09/23/2015 05:02 AM, Brett Stevens
wrote:<br>
</div>
<blockquote
cite="mid:CAK02sjsh7JXf56xuMSEW_knZcNem9FNsdjEhd3NAQOQiLjeTrA@mail.gmail.com"
type="cite">
<div dir="ltr">Hi Sahina,
<div><br>
</div>
<div>as requested, here are some logs taken during a domain
create.</div>
<div><br>
</div>
<div>
<p class=""><span class="">2015-09-22 18:46:44,320 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-88) [] START,
GlusterVolumesListVDSCommand(HostName = sjcstorage01,
GlusterVolumesListVDSParameters:{runAsync='true',
hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id:
2205ff1</span></p>
<p class=""><span class="">2015-09-22 18:46:44,413 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-88) [] Could not associate
brick 'sjcstorage01:/export/vmstore/brick01' of volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct
network as no gluster network found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p class=""><span class="">2015-09-22 18:46:44,417 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-88) [] Could not associate
brick 'sjcstorage02:/export/vmstore/brick01' of volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct
network as no gluster network found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p class=""><span class="">2015-09-22 18:46:44,417 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-88) [] Could not add brick
'sjcvhost02:/export/vmstore/brick01' to volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' - server uuid
'29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in
cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p class=""><span class="">2015-09-22 18:46:44,418 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-88) [] FINISH,
GlusterVolumesListVDSCommand, return:
{030f270a-0999-4df4-9b14-ae56eb0a2fb9=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a0628f36},
log id: 2205ff1</span></p>
<p class=""><span class="">2015-09-22 18:46:45,215 INFO
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
(default task-24) [5099cda3] Lock Acquired to object
'EngineLock:{exclusiveLocks='[sjcstorage01:/vmstore=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
sharedLocks='null'}'</span></p>
<p class=""><span class="">2015-09-22 18:46:45,230 INFO
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
(default task-24) [5099cda3] Running command:
AddStorageServerConnectionCommand internal: false.
Entities affected : ID:
aaa00000-0000-0000-0000-123456789aaa Type: SystemAction
group CREATE_STORAGE_DOMAIN with role type ADMIN</span></p>
<p class=""><span class="">2015-09-22 18:46:45,233 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-24) [5099cda3] START,
ConnectStorageServerVDSCommand(HostName = sjcvhost03,
StorageServerConnectionManagementVDSParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storagePoolId='00000000-0000-0000-0000-000000000000',
storageType='GLUSTERFS',
connectionList='[StorageServerConnections:{id='null',
connection='sjcstorage01:/vmstore', iqn='null',
vfsType='glusterfs', mountOptions='null',
nfsVersion='null', nfsRetrans='null', nfsTimeo='null',
iface='null', netIfaceName='null'}]'}), log id: 6a112292</span></p>
<p class=""><span class="">2015-09-22 18:46:48,065 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-24) [5099cda3] FINISH,
ConnectStorageServerVDSCommand, return:
{00000000-0000-0000-0000-000000000000=0}, log id: 6a112292</span></p>
<p class=""><span class="">2015-09-22 18:46:48,073 INFO
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
(default task-24) [5099cda3] Lock freed to object
'EngineLock:{exclusiveLocks='[sjcstorage01:/vmstore=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
sharedLocks='null'}'</span></p>
<p class=""><span class="">2015-09-22 18:46:48,188 INFO
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23) [6410419] Running command:
AddGlusterFsStorageDomainCommand internal: false. Entities
affected : ID: aaa00000-0000-0000-0000-123456789aaa Type:
SystemAction group CREATE_STORAGE_DOMAIN with role type
ADMIN</span></p>
<p class=""><span class="">2015-09-22 18:46:48,206 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-23) [6410419] START,
ConnectStorageServerVDSCommand(HostName = sjcvhost03,
StorageServerConnectionManagementVDSParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storagePoolId='00000000-0000-0000-0000-000000000000',
storageType='GLUSTERFS',
connectionList='[StorageServerConnections:{id='ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e',
connection='sjcstorage01:/vmstore', iqn='null',
vfsType='glusterfs', mountOptions='null',
nfsVersion='null', nfsRetrans='null', nfsTimeo='null',
iface='null', netIfaceName='null'}]'}), log id: 38a2b0d</span></p>
<p class=""><span class="">2015-09-22 18:46:48,219 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-23) [6410419] FINISH,
ConnectStorageServerVDSCommand, return:
{ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=0}, log id: 38a2b0d</span></p>
<p class=""><span class="">2015-09-22 18:46:48,221 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23) [6410419] START,
CreateStorageDomainVDSCommand(HostName = sjcvhost03,
CreateStorageDomainVDSCommandParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storageDomain='StorageDomainStatic:{name='sjcvmstore',
id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
args='sjcstorage01:/vmstore'}), log id: b9fe587</span></p>
<p class=""><span class="">2015-09-22 18:46:48,744 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-23) [6410419] Correlation ID: null, Call
Stack: null, Custom Event ID: -1, Message: VDSM sjcvhost03
command failed: Error creating a storage domain's
metadata: ("create meta file 'outbox' failed: [Errno 5]
Input/output error",)</span></p>
<p class=""><span class="">2015-09-22 18:46:48,744 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23) [6410419] Command
'org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand'
return value 'StatusOnlyReturnForXmlRpc
[status=StatusForXmlRpc [code=362, message=Error creating
a storage domain's metadata: ("create meta file 'outbox'
failed: [Errno 5] Input/output error",)]]'</span></p>
<p class=""><span class="">2015-09-22 18:46:48,744 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23) [6410419] HostName = sjcvhost03</span></p>
<p class=""><span class="">2015-09-22 18:46:48,745 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23) [6410419] Command
'CreateStorageDomainVDSCommand(HostName = sjcvhost03,
CreateStorageDomainVDSCommandParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storageDomain='StorageDomainStatic:{name='sjcvmstore',
id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
args='sjcstorage01:/vmstore'})' execution failed:
VDSGenericException: VDSErrorException: Failed in
vdscommand to CreateStorageDomainVDS, error = Error
creating a storage domain's metadata: ("create meta file
'outbox' failed: [Errno 5] Input/output error",)</span></p>
<p class=""><span class="">2015-09-22 18:46:48,745 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23) [6410419] FINISH,
CreateStorageDomainVDSCommand, log id: b9fe587</span></p>
<p class=""><span class="">2015-09-22 18:46:48,745 ERROR
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23) [6410419] Command
'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand'
failed: EngineException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed in
vdscommand to CreateStorageDomainVDS, error = Error
creating a storage domain's metadata: ("create meta file
'outbox' failed: [Errno 5] Input/output error",) (Failed
with error StorageDomainMetadataCreationError and code
362)</span></p>
<p class=""><span class="">2015-09-22 18:46:48,755 INFO
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23) [6410419] Command
[id=5ae15f53-69a1-47c5-b3a5-82f32c20e48f]: Compensating
NEW_ENTITY_ID of
org.ovirt.engine.core.common.businessentities.StorageDomainDynamic;
snapshot: 597d5b5b-7c09-4de9-8840-6993bd9b61a6.</span></p>
<p class=""><span class="">2015-09-22 18:46:48,758 INFO
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23) [6410419] Command
[id=5ae15f53-69a1-47c5-b3a5-82f32c20e48f]: Compensating
NEW_ENTITY_ID of
org.ovirt.engine.core.common.businessentities.StorageDomainStatic;
snapshot: 597d5b5b-7c09-4de9-8840-6993bd9b61a6.</span></p>
<p class=""><span class="">2015-09-22 18:46:48,769 ERROR
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23) [6410419] Transaction rolled-back for
command
'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand'.</span></p>
<p class=""><span class="">2015-09-22 18:46:48,784 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-23) [6410419] Correlation ID: 6410419, Job
ID: 78692780-a06f-49a5-b6b1-e6c24a820d62, Call Stack:
null, Custom Event ID: -1, Message: Failed to add Storage
Domain sjcvmstore. (User: admin@internal)</span></p>
<p class=""><span class="">2015-09-22 18:46:48,996 INFO
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
(default task-32) [1635a244] Lock Acquired to object
'EngineLock:{exclusiveLocks='[ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>,
sjcstorage01:/vmstore=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
sharedLocks='null'}'</span></p>
<p class=""><span class="">2015-09-22 18:46:49,018 INFO
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
(default task-32) [1635a244] Running command:
RemoveStorageServerConnectionCommand internal: false.
Entities affected : ID:
aaa00000-0000-0000-0000-123456789aaa Type: SystemAction
group CREATE_STORAGE_DOMAIN with role type ADMIN</span></p>
<p class=""><span class="">2015-09-22 18:46:49,024 INFO
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
(default task-32) [1635a244] Removing connection
'ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e' from database </span></p>
<p class=""><span class="">2015-09-22 18:46:49,026 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
(default task-32) [1635a244] START,
DisconnectStorageServerVDSCommand(HostName = sjcvhost03,
StorageServerConnectionManagementVDSParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storagePoolId='00000000-0000-0000-0000-000000000000',
storageType='GLUSTERFS',
connectionList='[StorageServerConnections:{id='ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e',
connection='sjcstorage01:/vmstore', iqn='null',
vfsType='glusterfs', mountOptions='null',
nfsVersion='null', nfsRetrans='null', nfsTimeo='null',
iface='null', netIfaceName='null'}]'}), log id: 39d3b568</span></p>
<p class=""><span class="">2015-09-22 18:46:49,248 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
(default task-32) [1635a244] FINISH,
DisconnectStorageServerVDSCommand, return:
{ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=0}, log id: 39d3b568</span></p>
<p class=""><span class="">2015-09-22 18:46:49,252 INFO
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
(default task-32) [1635a244] Lock freed to object
'EngineLock:{exclusiveLocks='[ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>,
sjcstorage01:/vmstore=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
sharedLocks='null'}'</span></p>
<p class=""><span class="">2015-09-22 18:46:49,431 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-3) [] START,
GlusterVolumesListVDSCommand(HostName = sjcstorage01,
GlusterVolumesListVDSParameters:{runAsync='true',
hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id:
17014ae8</span></p>
<p class=""><span class="">2015-09-22 18:46:49,511 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-3) [] Could not associate
brick 'sjcstorage01:/export/vmstore/brick01' of volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct
network as no gluster network found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p class=""><span class="">2015-09-22 18:46:49,515 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-3) [] Could not associate
brick 'sjcstorage02:/export/vmstore/brick01' of volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct
network as no gluster network found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p class=""><span class="">2015-09-22 18:46:49,516 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-3) [] Could not add brick
'sjcvhost02:/export/vmstore/brick01' to volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' - server uuid
'29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in
cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p class=""><span class="">2015-09-22 18:46:49,516 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-3) [] FINISH,
GlusterVolumesListVDSCommand, return:
{030f270a-0999-4df4-9b14-ae56eb0a2fb9=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@92ed0f75},
log id: 17014ae8</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class="">ovirt engine thinks that
sjcstorage01 is sjcstorage01, its all testbed at the
moment and is all short names, defined in /etc/hosts (all
copied to each server for consistancy)</span></p>
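<p class=""><span class="">One way to double-check that those names
resolve consistently on every node - just a sketch, using the
hostnames mentioned in this thread:</span></p>

```shell
# Sketch: run on each node and compare output; every short name should
# resolve, and to the same address, everywhere.
# Hostnames are the ones from this thread; adjust to your environment.
for h in sjcstorage01 sjcstorage02 sjcvhost02 sjcvhost03; do
    addr=$(getent hosts "$h" | awk '{print $1; exit}')
    echo "$h -> ${addr:-UNRESOLVED}"
done
```

<p class=""><span class="">A mismatch, or a name resolving on one node
but not another, could contribute to the brick-association warnings
above.</span></p>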
<p class=""><span class=""><br>
</span></p>
<p class=""><span class="">volume info for vmstore is</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class="">Status of volume: vmstore</span></p>
<p class=""><span class="">Gluster process
TCP Port RDMA Port Online Pid</span></p>
<p class=""><span class="">------------------------------------------------------------------------------</span></p>
<p class=""><span class="">Brick
sjcstorage01:/export/vmstore/brick01 49157 0
Y 7444 </span></p>
<p class=""><span class="">Brick
sjcstorage02:/export/vmstore/brick01 49157 0
Y 4063 </span></p>
<p class=""><span class="">Brick
sjcvhost02:/export/vmstore/brick01 49156 0
Y 3243 </span></p>
<p class=""><span class="">NFS Server on localhost
2049 0 Y 3268 </span></p>
<p class=""><span class="">Self-heal Daemon on localhost
N/A N/A Y 3284 </span></p>
<p class=""><span class="">NFS Server on sjcstorage01
2049 0 Y 7463 </span></p>
<p class=""><span class="">Self-heal Daemon on sjcstorage01
N/A N/A Y 7472 </span></p>
<p class=""><span class="">NFS Server on sjcstorage02
2049 0 Y 4082 </span></p>
<p class=""><span class="">Self-heal Daemon on sjcstorage02
N/A N/A Y 4090 </span></p>
<p class=""><span class=""> </span></p>
<p class=""><span class="">Task Status of Volume vmstore</span></p>
<p class=""><span class="">------------------------------------------------------------------------------</span></p>
<p class="">
</p>
<p class=""><span class="">There are no active volume tasks</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class="">vdsm logs from time the domain is
added</span></p>
<p class=""><span class=""><br>
</span></p>
<p class="">hread-789::DEBUG::2015-09-22
19:12:05,865::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-790::DEBUG::2015-09-22
19:12:07,797::task::595::Storage.TaskManager.Task::(_updateState)
Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::moving from
state init -> state preparing</p>
<p class="">Thread-790::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:07,797::logUtils::48::dispatcher::(wrapper) Run and
protect: repoStats(options=None)</p>
<p class="">Thread-790::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:07,797::logUtils::51::dispatcher::(wrapper) Run and
protect: repoStats, Return response: {}</p>
<p class="">Thread-790::DEBUG::2015-09-22
19:12:07,797::task::1191::Storage.TaskManager.Task::(prepare)
Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::finished: {}</p>
<p class="">Thread-790::DEBUG::2015-09-22
19:12:07,797::task::595::Storage.TaskManager.Task::(_updateState)
Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::moving from
state preparing -> state finished</p>
<p class="">Thread-790::DEBUG::2015-09-22
19:12:07,797::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-790::DEBUG::2015-09-22
19:12:07,797::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-790::DEBUG::2015-09-22
19:12:07,797::task::993::Storage.TaskManager.Task::(_decref)
Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::ref 0 aborting
False</p>
<p class="">Thread-790::DEBUG::2015-09-22
19:12:07,802::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Reactor thread::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:14,816::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from <a moz-do-not-send="true"
href="http://127.0.0.1:52510">127.0.0.1:52510</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:14,822::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11</p>
<p class="">Reactor thread::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:14,823::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from <a moz-do-not-send="true"
href="http://127.0.0.1:52510">127.0.0.1:52510</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:14,823::bindingxmlrpc::1297::XmlDetector::(handle_socket)
xml over http detected from ('127.0.0.1', 52510)</p>
<p class="">BindingXMLRPC::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:14,823::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52510">127.0.0.1:52510</a></p>
<p class="">Thread-791::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:14,823::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52510">127.0.0.1:52510</a> started</p>
<p class="">Thread-791::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:14,825::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52510">127.0.0.1:52510</a> stopped</p>
<p class="">Thread-792::DEBUG::2015-09-22
19:12:20,872::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-793::DEBUG::2015-09-22
19:12:22,832::task::595::Storage.TaskManager.Task::(_updateState)
Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::moving from
state init -> state preparing</p>
<p class="">Thread-793::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:22,832::logUtils::48::dispatcher::(wrapper) Run and
protect: repoStats(options=None)</p>
<p class="">Thread-793::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:22,832::logUtils::51::dispatcher::(wrapper) Run and
protect: repoStats, Return response: {}</p>
<p class="">Thread-793::DEBUG::2015-09-22
19:12:22,832::task::1191::Storage.TaskManager.Task::(prepare)
Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::finished: {}</p>
<p class="">Thread-793::DEBUG::2015-09-22
19:12:22,832::task::595::Storage.TaskManager.Task::(_updateState)
Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::moving from
state preparing -> state finished</p>
<p class="">Thread-793::DEBUG::2015-09-22
19:12:22,833::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-793::DEBUG::2015-09-22
19:12:22,833::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-793::DEBUG::2015-09-22
19:12:22,833::task::993::Storage.TaskManager.Task::(_decref)
Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::ref 0 aborting
False</p>
<p class="">Thread-793::DEBUG::2015-09-22
19:12:22,837::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Reactor thread::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:29,841::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from <a moz-do-not-send="true"
href="http://127.0.0.1:52511">127.0.0.1:52511</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:29,848::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11</p>
<p class="">Reactor thread::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:29,849::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from <a moz-do-not-send="true"
href="http://127.0.0.1:52511">127.0.0.1:52511</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:29,849::bindingxmlrpc::1297::XmlDetector::(handle_socket)
xml over http detected from ('127.0.0.1', 52511)</p>
<p class="">BindingXMLRPC::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:29,849::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52511">127.0.0.1:52511</a></p>
<p class="">Thread-794::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:29,849::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52511">127.0.0.1:52511</a> started</p>
<p class="">Thread-794::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:29,851::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52511">127.0.0.1:52511</a> stopped</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,520::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
Calling 'StoragePool.connectStorageServer' in bridge with
{u'connectionParams': [{u'id':
u'00000000-0000-0000-0000-000000000000', u'connection':
u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'',
u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password':
'********', u'port': u''}], u'storagepoolID':
u'00000000-0000-0000-0000-000000000000', u'domainType': 7}</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,520::task::595::Storage.TaskManager.Task::(_updateState)
Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::moving from
state init -> state preparing</p>
<p class="">Thread-795::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:35,521::logUtils::48::dispatcher::(wrapper) Run and
protect: connectStorageServer(domType=7,
spUUID=u'00000000-0000-0000-0000-000000000000',
conList=[{u'id': u'00000000-0000-0000-0000-000000000000',
u'connection': u'sjcstorage01:/vmstore', u'iqn': u'',
u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs',
u'password': '********', u'port': u''}], options=None)</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,539::fileUtils::143::Storage.fileUtils::(createdir)
Creating directory:
/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore mode:
None</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,540::mount::229::Storage.Misc.excCmd::(_runcmd)
/usr/bin/sudo -n /usr/bin/systemd-run --scope
--slice=vdsm-glusterfs /usr/bin/mount -t glusterfs -o
backup-volfile-servers=sjcstorage02:sjcvhost02
sjcstorage01:/vmstore
/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore (cwd
None)</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,706::hsm::2417::Storage.HSM::(__prefetchDomains)
glusterDomPath: glusterSD/*</p>
<p class="">Thread-796::DEBUG::2015-09-22
19:12:35,707::__init__::298::IOProcessClient::(_run)
Starting IOProcess...</p>
<p class="">Thread-797::DEBUG::2015-09-22
19:12:35,712::__init__::298::IOProcessClient::(_run)
Starting IOProcess...</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,721::hsm::2429::Storage.HSM::(__prefetchDomains)
Found SD uuids: ()</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,721::hsm::2489::Storage.HSM::(connectStorageServer)
knownSDs: {41b75ca9-9783-42a7-9a23-10a2ae3cbb96:
storage.glusterSD.findDomain,
597d5b5b-7c09-4de9-8840-6993bd9b61a6:
storage.glusterSD.findDomain,
ef17fec4-fecf-4d7e-b815-d1db4ef65225:
storage.glusterSD.findDomain}</p>
<p class="">Thread-795::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:35,721::logUtils::51::dispatcher::(wrapper) Run and
protect: connectStorageServer, Return response:
{'statuslist': [{'status': 0, 'id':
u'00000000-0000-0000-0000-000000000000'}]}</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,722::task::1191::Storage.TaskManager.Task::(prepare)
Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::finished:
{'statuslist': [{'status': 0, 'id':
u'00000000-0000-0000-0000-000000000000'}]}</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,722::task::595::Storage.TaskManager.Task::(_updateState)
Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::moving from
state preparing -> state finished</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,722::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,722::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,722::task::993::Storage.TaskManager.Task::(_decref)
Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::ref 0 aborting
False</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,722::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
Return 'StoragePool.connectStorageServer' in bridge with
[{'status': 0, 'id':
u'00000000-0000-0000-0000-000000000000'}]</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,722::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,775::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
Calling 'StoragePool.connectStorageServer' in bridge with
{u'connectionParams': [{u'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection':
u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'',
u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password':
'********', u'port': u''}], u'storagepoolID':
u'00000000-0000-0000-0000-000000000000', u'domainType': 7}</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,775::task::595::Storage.TaskManager.Task::(_updateState)
Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::moving from
state init -> state preparing</p>
<p class="">Thread-798::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:35,776::logUtils::48::dispatcher::(wrapper) Run and
protect: connectStorageServer(domType=7,
spUUID=u'00000000-0000-0000-0000-000000000000',
conList=[{u'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054',
u'connection': u'sjcstorage01:/vmstore', u'iqn': u'',
u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs',
u'password': '********', u'port': u''}], options=None)</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,777::hsm::2417::Storage.HSM::(__prefetchDomains)
glusterDomPath: glusterSD/*</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,782::hsm::2429::Storage.HSM::(__prefetchDomains)
Found SD uuids: ()</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,782::hsm::2489::Storage.HSM::(connectStorageServer)
knownSDs: {41b75ca9-9783-42a7-9a23-10a2ae3cbb96:
storage.glusterSD.findDomain,
597d5b5b-7c09-4de9-8840-6993bd9b61a6:
storage.glusterSD.findDomain,
ef17fec4-fecf-4d7e-b815-d1db4ef65225:
storage.glusterSD.findDomain}</p>
<p class="">Thread-798::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:35,782::logUtils::51::dispatcher::(wrapper) Run and
protect: connectStorageServer, Return response:
{'statuslist': [{'status': 0, 'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,783::task::1191::Storage.TaskManager.Task::(prepare)
Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::finished:
{'statuslist': [{'status': 0, 'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,783::task::595::Storage.TaskManager.Task::(_updateState)
Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::moving from
state preparing -> state finished</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,783::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,783::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,783::task::993::Storage.TaskManager.Task::(_decref)
Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::ref 0 aborting
False</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,783::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
Return 'StoragePool.connectStorageServer' in bridge with
[{'status': 0, 'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,783::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,787::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
Calling 'StorageDomain.create' in bridge with {u'name':
u'sjcvmstore01', u'domainType': 7, u'domainClass': 1,
u'typeArgs': u'sjcstorage01:/vmstore', u'version': u'3',
u'storagedomainID': u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3'}</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,788::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from
state init -> state preparing</p>
<p class="">Thread-801::INFO::2015-09-22
19:12:35,788::logUtils::48::dispatcher::(wrapper) Run and
protect: createStorageDomain(storageType=7,
sdUUID=u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',
domainName=u'sjcvmstore01',
typeSpecificArg=u'sjcstorage01:/vmstore', domClass=1,
domVersion=u'3', options=None)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,788::misc::733::Storage.SamplingMethod::(__call__)
Trying to enter sampling method (storage.sdc.refreshStorage)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,788::misc::736::Storage.SamplingMethod::(__call__)
Got in to sampling method</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,788::misc::733::Storage.SamplingMethod::(__call__)
Trying to enter sampling method (storage.iscsi.rescan)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,788::misc::736::Storage.SamplingMethod::(__call__)
Got in to sampling method</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,788::iscsi::431::Storage.ISCSI::(rescan) Performing
SCSI scan, this will take up to 30 seconds</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,788::iscsiadm::97::Storage.Misc.excCmd::(_runCmd)
/usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,821::misc::743::Storage.SamplingMethod::(__call__)
Returning last result</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,821::misc::733::Storage.SamplingMethod::(__call__)
Trying to enter sampling method (storage.hba.rescan)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,821::misc::736::Storage.SamplingMethod::(__call__)
Got in to sampling method</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,821::hba::56::Storage.HBA::(rescan) Starting scan</p>
<p class="">Thread-802::DEBUG::2015-09-22
19:12:35,882::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,912::hba::62::Storage.HBA::(rescan) Scan finished</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,912::misc::743::Storage.SamplingMethod::(__call__)
Returning last result</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,912::multipath::77::Storage.Misc.excCmd::(rescan)
/usr/bin/sudo -n /usr/sbin/multipath (cwd None)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,936::multipath::77::Storage.Misc.excCmd::(rescan)
SUCCESS: <err> = ''; <rc> = 0</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,936::utils::661::root::(execCmd) /sbin/udevadm
settle --timeout=5 (cwd None)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,946::utils::679::root::(execCmd) SUCCESS:
<err> = ''; <rc> = 0</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,947::lvm::498::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' got the operation mutex</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,947::lvm::500::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' released the operation
mutex</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,947::lvm::509::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' got the operation mutex</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,948::lvm::511::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' released the operation
mutex</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,948::lvm::529::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' got the operation mutex</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,948::lvm::531::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' released the operation
mutex</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,948::misc::743::Storage.SamplingMethod::(__call__)
Returning last result</p>
<p class="">Thread-801::ERROR::2015-09-22
19:12:35,949::sdc::138::Storage.StorageDomainCache::(_findDomain)
looking for unfetched domain
c02fda97-62e3-40d3-9a6e-ac5d100f8ad3</p>
<p class="">Thread-801::ERROR::2015-09-22
19:12:35,949::sdc::155::Storage.StorageDomainCache::(_findUnfetchedDomain)
looking for domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,949::lvm::371::Storage.OperationMutex::(_reloadvgs)
Operation 'lvm reload operation' got the operation mutex</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,950::lvm::291::Storage.Misc.excCmd::(cmd)
/usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices {
preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 obtain_device_list_from_udev=0
filter = [ '\''r|.*|'\'' ] } global { locking_type=1
prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 }
backup { retain_min = 50 retain_days = 0 } ' --noheadings
--units b --nosuffix --separator '|' --ignoreskippedcluster
-o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 (cwd None)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,985::lvm::291::Storage.Misc.excCmd::(cmd) FAILED:
<err> = ' WARNING: lvmetad is running but disabled.
Restart lvmetad before enabling it!\n Volume group
"c02fda97-62e3-40d3-9a6e-ac5d100f8ad3" not found\n Cannot
process volume group
c02fda97-62e3-40d3-9a6e-ac5d100f8ad3\n'; <rc> = 5</p>
<p class="">Thread-801::WARNING::2015-09-22
19:12:35,986::lvm::376::Storage.LVM::(_reloadvgs) lvm vgs
failed: 5 [] [' WARNING: lvmetad is running but disabled.
Restart lvmetad before enabling it!', ' Volume group
"c02fda97-62e3-40d3-9a6e-ac5d100f8ad3" not found', ' Cannot
process volume group c02fda97-62e3-40d3-9a6e-ac5d100f8ad3']</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,987::lvm::416::Storage.OperationMutex::(_reloadvgs)
Operation 'lvm reload operation' released the operation
mutex</p>
<p class="">Thread-801::ERROR::2015-09-22
19:12:35,997::sdc::144::Storage.StorageDomainCache::(_findDomain)
domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 not found</p>
<p class="">Traceback (most recent call last):</p>
<p class=""> File "/usr/share/vdsm/storage/sdc.py", line 142,
in _findDomain</p>
<p class=""> dom = findMethod(sdUUID)</p>
<p class=""> File "/usr/share/vdsm/storage/sdc.py", line 172,
in _findUnfetchedDomain</p>
<p class=""> raise se.StorageDomainDoesNotExist(sdUUID)</p>
<p class="">StorageDomainDoesNotExist: Storage domain does not
exist: (u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',)</p>
<p class="">Thread-801::INFO::2015-09-22
19:12:35,998::nfsSD::69::Storage.StorageDomain::(create)
sdUUID=c02fda97-62e3-40d3-9a6e-ac5d100f8ad3
domainName=sjcvmstore01 remotePath=sjcstorage01:/vmstore
domClass=1</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,015::__init__::298::IOProcessClient::(_run)
Starting IOProcess...</p>
<p class="">Thread-801::ERROR::2015-09-22
19:12:36,067::task::866::Storage.TaskManager.Task::(_setError)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Unexpected
error</p>
<p class="">Traceback (most recent call last):</p>
<p class=""> File "/usr/share/vdsm/storage/task.py", line
873, in _run</p>
<p class=""> return fn(*args, **kargs)</p>
<p class=""> File "/usr/share/vdsm/logUtils.py", line 49, in
wrapper</p>
<p class=""> res = f(*args, **kwargs)</p>
<p class=""> File "/usr/share/vdsm/storage/hsm.py", line
2697, in createStorageDomain</p>
<p class=""> domVersion)</p>
<p class=""> File "/usr/share/vdsm/storage/nfsSD.py", line
84, in create</p>
<p class=""> remotePath, storageType, version)</p>
<p class=""> File "/usr/share/vdsm/storage/fileSD.py", line
264, in _prepareMetadata</p>
<p class=""> "create meta file '%s' failed: %s" %
(metaFile, str(e)))</p>
<p class="">StorageDomainMetadataCreationError: Error creating
a storage domain's metadata: ("create meta file 'outbox'
failed: [Errno 5] Input/output error",)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,067::task::885::Storage.TaskManager.Task::(_run)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Task._run:
d2d29352-8677-45cb-a4ab-06aa32cf1acb (7,
u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3', u'sjcvmstore01',
u'sjcstorage01:/vmstore', 1, u'3') {} failed - stopping task</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,067::task::1246::Storage.TaskManager.Task::(stop)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::stopping in
state preparing (force False)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,067::task::993::Storage.TaskManager.Task::(_decref)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::ref 1 aborting
True</p>
<p class="">Thread-801::INFO::2015-09-22
19:12:36,067::task::1171::Storage.TaskManager.Task::(prepare)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::aborting: Task
is aborted: "Error creating a storage domain's metadata" -
code 362</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::task::1176::Storage.TaskManager.Task::(prepare)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Prepare:
aborted: Error creating a storage domain's metadata</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::task::993::Storage.TaskManager.Task::(_decref)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::ref 0 aborting
True</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::task::928::Storage.TaskManager.Task::(_doAbort)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Task._doAbort:
force False</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from
state preparing -> state aborting</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::task::550::Storage.TaskManager.Task::(__state_aborting)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::_aborting:
recover policy none</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from
state aborting -> state failed</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-801::ERROR::2015-09-22
19:12:36,068::dispatcher::76::Storage.Dispatcher::(wrapper)
{'status': {'message': 'Error creating a storage domain\'s
metadata: ("create meta file \'outbox\' failed: [Errno 5]
Input/output error",)', 'code': 362}}</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,069::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,180::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
Calling 'StoragePool.disconnectStorageServer' in bridge with
{u'connectionParams': [{u'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection':
u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'',
u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password':
'********', u'port': u''}], u'storagepoolID':
u'00000000-0000-0000-0000-000000000000', u'domainType': 7}</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,181::task::595::Storage.TaskManager.Task::(_updateState)
Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::moving from
state init -> state preparing</p>
<p class="">Thread-807::INFO::2015-09-22
19:12:36,182::logUtils::48::dispatcher::(wrapper) Run and
protect: disconnectStorageServer(domType=7,
spUUID=u'00000000-0000-0000-0000-000000000000',
conList=[{u'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054',
u'connection': u'sjcstorage01:/vmstore', u'iqn': u'',
u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs',
u'password': '********', u'port': u''}], options=None)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,182::mount::229::Storage.Misc.excCmd::(_runcmd)
/usr/bin/sudo -n /usr/bin/umount -f -l
/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore (cwd
None)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,222::misc::733::Storage.SamplingMethod::(__call__)
Trying to enter sampling method (storage.sdc.refreshStorage)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,222::misc::736::Storage.SamplingMethod::(__call__)
Got in to sampling method</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,222::misc::733::Storage.SamplingMethod::(__call__)
Trying to enter sampling method (storage.iscsi.rescan)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,222::misc::736::Storage.SamplingMethod::(__call__)
Got in to sampling method</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,223::iscsi::431::Storage.ISCSI::(rescan) Performing
SCSI scan, this will take up to 30 seconds</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,223::iscsiadm::97::Storage.Misc.excCmd::(_runCmd)
/usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,258::misc::743::Storage.SamplingMethod::(__call__)
Returning last result</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,258::misc::733::Storage.SamplingMethod::(__call__)
Trying to enter sampling method (storage.hba.rescan)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,258::misc::736::Storage.SamplingMethod::(__call__)
Got in to sampling method</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,258::hba::56::Storage.HBA::(rescan) Starting scan</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,350::hba::62::Storage.HBA::(rescan) Scan finished</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,350::misc::743::Storage.SamplingMethod::(__call__)
Returning last result</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,350::multipath::77::Storage.Misc.excCmd::(rescan)
/usr/bin/sudo -n /usr/sbin/multipath (cwd None)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,374::multipath::77::Storage.Misc.excCmd::(rescan)
SUCCESS: <err> = ''; <rc> = 0</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,374::utils::661::root::(execCmd) /sbin/udevadm
settle --timeout=5 (cwd None)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,383::utils::679::root::(execCmd) SUCCESS:
<err> = ''; <rc> = 0</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,384::lvm::498::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' got the operation mutex</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,385::lvm::500::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' released the operation
mutex</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,385::lvm::509::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' got the operation mutex</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,385::lvm::511::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' released the operation
mutex</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,386::lvm::529::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' got the operation mutex</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,386::lvm::531::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' released the operation
mutex</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,386::misc::743::Storage.SamplingMethod::(__call__)
Returning last result</p>
<p class="">Thread-807::INFO::2015-09-22
19:12:36,386::logUtils::51::dispatcher::(wrapper) Run and
protect: disconnectStorageServer, Return response:
{'statuslist': [{'status': 0, 'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,387::task::1191::Storage.TaskManager.Task::(prepare)
Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::finished:
{'statuslist': [{'status': 0, 'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,387::task::595::Storage.TaskManager.Task::(_updateState)
Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::moving from
state preparing -> state finished</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,387::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,387::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,387::task::993::Storage.TaskManager.Task::(_decref)
Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::ref 0 aborting
False</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,388::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
Return 'StoragePool.disconnectStorageServer' in bridge with
[{'status': 0, 'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,388::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-808::DEBUG::2015-09-22
19:12:37,868::task::595::Storage.TaskManager.Task::(_updateState)
Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving from
state init -> state preparing</p>
<p class="">Thread-808::INFO::2015-09-22
19:12:37,868::logUtils::48::dispatcher::(wrapper) Run and
protect: repoStats(options=None)</p>
<p class="">Thread-808::INFO::2015-09-22
19:12:37,868::logUtils::51::dispatcher::(wrapper) Run and
protect: repoStats, Return response: {}</p>
<p class="">Thread-808::DEBUG::2015-09-22
19:12:37,868::task::1191::Storage.TaskManager.Task::(prepare)
Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::finished: {}</p>
<p class="">Thread-808::DEBUG::2015-09-22
19:12:37,868::task::595::Storage.TaskManager.Task::(_updateState)
Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving from
state preparing -> state finished</p>
<p class="">Thread-808::DEBUG::2015-09-22
19:12:37,868::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-808::DEBUG::2015-09-22
19:12:37,868::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-808::DEBUG::2015-09-22
19:12:37,868::task::993::Storage.TaskManager.Task::(_decref)
Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::ref 0 aborting
False</p>
<p class="">Thread-808::DEBUG::2015-09-22
19:12:37,873::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Reactor thread::INFO::2015-09-22
19:12:44,867::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:52512</p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:44,874::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11</p>
<p class="">Reactor thread::INFO::2015-09-22
19:12:44,875::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:52512</p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:44,875::bindingxmlrpc::1297::XmlDetector::(handle_socket)
xml over http detected from ('127.0.0.1', 52512)</p>
<p class="">BindingXMLRPC::INFO::2015-09-22
19:12:44,875::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for 127.0.0.1:52512</p>
<p class="">Thread-809::INFO::2015-09-22
19:12:44,876::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:52512 started</p>
<p class="">Thread-809::INFO::2015-09-22
19:12:44,877::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:52512 stopped</p>
<p class="">Thread-810::DEBUG::2015-09-22
19:12:50,889::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-811::DEBUG::2015-09-22
19:12:52,902::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::moving from
state init -> state preparing</p>
<p class="">Thread-811::INFO::2015-09-22
19:12:52,902::logUtils::48::dispatcher::(wrapper) Run and
protect: repoStats(options=None)</p>
<p class="">Thread-811::INFO::2015-09-22
19:12:52,902::logUtils::51::dispatcher::(wrapper) Run and
protect: repoStats, Return response: {}</p>
<p class="">Thread-811::DEBUG::2015-09-22
19:12:52,902::task::1191::Storage.TaskManager.Task::(prepare)
Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::finished: {}</p>
<p class="">Thread-811::DEBUG::2015-09-22
19:12:52,903::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::moving from
state preparing -> state finished</p>
<p class="">Thread-811::DEBUG::2015-09-22
19:12:52,903::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-811::DEBUG::2015-09-22
19:12:52,903::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-811::DEBUG::2015-09-22
19:12:52,903::task::993::Storage.TaskManager.Task::(_decref)
Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::ref 0 aborting
False</p>
<p class="">Thread-811::DEBUG::2015-09-22
19:12:52,908::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Reactor thread::INFO::2015-09-22
19:12:59,895::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:52513</p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:59,902::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11</p>
<p class="">Reactor thread::INFO::2015-09-22
19:12:59,902::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:52513</p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:59,902::bindingxmlrpc::1297::XmlDetector::(handle_socket)
xml over http detected from ('127.0.0.1', 52513)</p>
<p class="">BindingXMLRPC::INFO::2015-09-22
19:12:59,903::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for 127.0.0.1:52513</p>
<p class="">Thread-812::INFO::2015-09-22
19:12:59,903::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:52513 started</p>
<p class="">Thread-812::INFO::2015-09-22
19:12:59,904::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:52513 stopped</p>
<p class="">Thread-813::DEBUG::2015-09-22
19:13:05,898::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-814::DEBUG::2015-09-22
19:13:07,934::task::595::Storage.TaskManager.Task::(_updateState)
Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::moving from
state init -> state preparing</p>
<p class="">Thread-814::INFO::2015-09-22
19:13:07,935::logUtils::48::dispatcher::(wrapper) Run and
protect: repoStats(options=None)</p>
<p class="">Thread-814::INFO::2015-09-22
19:13:07,935::logUtils::51::dispatcher::(wrapper) Run and
protect: repoStats, Return response: {}</p>
<p class="">Thread-814::DEBUG::2015-09-22
19:13:07,935::task::1191::Storage.TaskManager.Task::(prepare)
Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::finished: {}</p>
<p class="">Thread-814::DEBUG::2015-09-22
19:13:07,935::task::595::Storage.TaskManager.Task::(_updateState)
Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::moving from
state preparing -> state finished</p>
<p class="">Thread-814::DEBUG::2015-09-22
19:13:07,935::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-814::DEBUG::2015-09-22
19:13:07,935::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-814::DEBUG::2015-09-22
19:13:07,935::task::993::Storage.TaskManager.Task::(_decref)
Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::ref 0 aborting
False</p>
<p class="">Thread-814::DEBUG::2015-09-22
19:13:07,939::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Reactor thread::INFO::2015-09-22
19:13:14,921::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:52515</p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:13:14,927::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11</p>
<p class="">Reactor thread::INFO::2015-09-22
19:13:14,928::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:52515</p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:13:14,928::bindingxmlrpc::1297::XmlDetector::(handle_socket)
xml over http detected from ('127.0.0.1', 52515)</p>
<p class="">BindingXMLRPC::INFO::2015-09-22
19:13:14,928::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for 127.0.0.1:52515</p>
<p class="">Thread-815::INFO::2015-09-22
19:13:14,928::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:52515 started</p>
<p class="">Thread-815::INFO::2015-09-22
19:13:14,930::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:52515 stopped</p>
<p class="">Thread-816::DEBUG::2015-09-22
19:13:20,906::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
</div>
<div><br>
</div>
<div><br>
</div>
<div>gluster logs</div>
<div><br>
</div>
<div>
<p class=""><span class="">+------------------------------------------------------------------------------+</span></p>
<p class=""><span class=""> 1: volume vmstore-client-0</span></p>
<p class=""><span class=""> 2: type protocol/client</span></p>
<p class=""><span class=""> 3: option ping-timeout 42</span></p>
<p class=""><span class=""> 4: option remote-host
sjcstorage01</span></p>
<p class=""><span class=""> 5: option remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class=""> 6: option transport-type
socket</span></p>
<p class=""><span class=""> 7: option send-gids true</span></p>
<p class=""><span class=""> 8: end-volume</span></p>
<p class=""><span class=""> 9: </span></p>
<p class=""><span class=""> 10: volume vmstore-client-1</span></p>
<p class=""><span class=""> 11: type protocol/client</span></p>
<p class=""><span class=""> 12: option ping-timeout 42</span></p>
<p class=""><span class=""> 13: option remote-host
sjcstorage02</span></p>
<p class=""><span class=""> 14: option remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class=""> 15: option transport-type
socket</span></p>
<p class=""><span class=""> 16: option send-gids true</span></p>
<p class=""><span class=""> 17: end-volume</span></p>
<p class=""><span class=""> 18: </span></p>
<p class=""><span class=""> 19: volume vmstore-client-2</span></p>
<p class=""><span class=""> 20: type protocol/client</span></p>
<p class=""><span class=""> 21: option ping-timeout 42</span></p>
<p class=""><span class=""> 22: option remote-host
sjcvhost02</span></p>
<p class=""><span class=""> 23: option remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class=""> 24: option transport-type
socket</span></p>
<p class=""><span class=""> 25: option send-gids true</span></p>
<p class=""><span class=""> 26: end-volume</span></p>
<p class=""><span class=""> 27: </span></p>
<p class=""><span class=""> 28: volume vmstore-replicate-0</span></p>
<p class=""><span class=""> 29: type cluster/replicate</span></p>
<p class=""><span class=""> 30: option arbiter-count 1</span></p>
<p class=""><span class=""> 31: subvolumes
vmstore-client-0 vmstore-client-1 vmstore-client-2</span></p>
<p class=""><span class=""> 32: end-volume</span></p>
<p class=""><span class=""> 33: </span></p>
<p class=""><span class=""> 34: volume vmstore-dht</span></p>
<p class=""><span class=""> 35: type cluster/distribute</span></p>
<p class=""><span class=""> 36: subvolumes
vmstore-replicate-0</span></p>
<p class=""><span class=""> 37: end-volume</span></p>
<p class=""><span class=""> 38: </span></p>
<p class=""><span class=""> 39: volume vmstore-write-behind</span></p>
<p class=""><span class=""> 40: type
performance/write-behind</span></p>
<p class=""><span class=""> 41: subvolumes vmstore-dht</span></p>
<p class=""><span class=""> 42: end-volume</span></p>
<p class=""><span class=""> 43: </span></p>
<p class=""><span class=""> 44: volume vmstore-read-ahead</span></p>
<p class=""><span class=""> 45: type
performance/read-ahead</span></p>
<p class=""><span class=""> 46: subvolumes
vmstore-write-behind</span></p>
<p class=""><span class=""> 47: end-volume</span></p>
<p class=""><span class=""> 48: </span></p>
<p class=""><span class=""> 49: volume vmstore-readdir-ahead</span></p>
<p class=""><span class=""> 50: type
performance/readdir-ahead</span></p>
<p class=""><span class="">52: end-volume</span></p>
<p class=""><span class=""> 53: </span></p>
<p class=""><span class=""> 54: volume vmstore-io-cache</span></p>
<p class=""><span class=""> 55: type performance/io-cache</span></p>
<p class=""><span class=""> 56: subvolumes
vmstore-readdir-ahead</span></p>
<p class=""><span class=""> 57: end-volume</span></p>
<p class=""><span class=""> 58: </span></p>
<p class=""><span class=""> 59: volume vmstore-quick-read</span></p>
<p class=""><span class=""> 60: type
performance/quick-read</span></p>
<p class=""><span class=""> 61: subvolumes
vmstore-io-cache</span></p>
<p class=""><span class=""> 62: end-volume</span></p>
<p class=""><span class=""> 63: </span></p>
<p class=""><span class=""> 64: volume vmstore-open-behind</span></p>
<p class=""><span class=""> 65: type
performance/open-behind</span></p>
<p class=""><span class=""> 66: subvolumes
vmstore-quick-read</span></p>
<p class=""><span class=""> 67: end-volume</span></p>
<p class=""><span class=""> 68: </span></p>
<p class=""><span class=""> 69: volume vmstore-md-cache</span></p>
<p class=""><span class=""> 70: type performance/md-cache</span></p>
<p class=""><span class=""> 71: subvolumes
vmstore-open-behind</span></p>
<p class=""><span class=""> 72: end-volume</span></p>
<p class=""><span class=""> 73: </span></p>
<p class=""><span class=""> 74: volume vmstore</span></p>
<p class=""><span class=""> 75: type debug/io-stats</span></p>
<p class=""><span class=""> 76: option latency-measurement
off</span></p>
<p class=""><span class=""> 77: option count-fop-hits off</span></p>
<p class=""><span class=""> 78: subvolumes
vmstore-md-cache</span></p>
<p class=""><span class=""> 79: end-volume</span></p>
<p class=""><span class=""> 80: </span></p>
<p class=""><span class=""> 81: volume meta-autoload</span></p>
<p class=""><span class=""> 82: type meta</span></p>
<p class=""><span class=""> 83: subvolumes vmstore</span></p>
<p class=""><span class=""> 84: end-volume</span></p>
<p class=""><span class=""> 85: </span></p>
<p class=""><span class="">+------------------------------------------------------------------------------+</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.586205] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-0:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.586325] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-1:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.586480] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-2:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.595052] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-0: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.595397] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-1: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.595576] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-2: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.595721] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-0: Connected to vmstore-client-0,
attached to remote volume '/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.595738] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-0: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.596044] I
[MSGID: 108005] [afr-common.c:3998:afr_notify]
0-vmstore-replicate-0: Subvolume 'vmstore-client-0' came
back up; going online.</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.596170] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-1: Connected to vmstore-client-1,
attached to remote volume '/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.596189] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-1: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">
</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.596495] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-2: Connected to vmstore-client-2,
attached to remote volume :</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.596189] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-1: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.596495] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-2: Connected to vmstore-client-2,
attached to remote volume '/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.596506] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-2: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.608758] I
[fuse-bridge.c:5053:fuse_graph_setup] 0-fuse: switched to
graph 0</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.608910] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-0: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.608936] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-1: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.608950] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-2: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.609695] I
[MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 2</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.609868] I
[fuse-bridge.c:3979:fuse_init] 0-glusterfs-fuse: FUSE
inited with protocol versions: glusterfs 7.22 kernel 7.22</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.616577] I
[MSGID: 109063] [dht-layout.c:702:dht_layout_normalize]
0-vmstore-dht: Found anomalies in / (gfid =
00000000-0000-0000-0000-000000000001). Holes=1 overlaps=0</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.620230] I
[MSGID: 109036]
[dht-common.c:7754:dht_log_new_layout_for_dir_selfheal]
0-vmstore-dht: Setting layout of / with [Subvol_name:
vmstore-replicate-0, Err: -1 , Start: 0 , Stop: 4294967295
, Hash: 1 ], </span></p>
<p class=""><span class="">[2015-09-22 05:29:08.122415] W
[fuse-bridge.c:1230:fuse_err_cbk] 0-glusterfs-fuse: 26:
REMOVEXATTR() /__DIRECT_IO_TEST__ => -1 (No data
available)</span></p>
<p class=""><span class="">[2015-09-22 05:29:08.137359] I
[MSGID: 109036]
[dht-common.c:7754:dht_log_new_layout_for_dir_selfheal]
0-vmstore-dht: Setting layout of
/061b73d5-ae59-462e-b674-ea9c60d436c2 with [Subvol_name:
vmstore-replicate-0, Err: -1 , Start: 0 , Stop: 4294967295
, Hash: 1 ], </span></p>
<p class=""><span class="">[2015-09-22 05:29:08.145835] I
[MSGID: 109036]
[dht-common.c:7754:dht_log_new_layout_for_dir_selfheal]
0-vmstore-dht: Setting layout of
/061b73d5-ae59-462e-b674-ea9c60d436c2/dom_md with
[Subvol_name: vmstore-replicate-0, Err: -1 , Start: 0 ,
Stop: 4294967295 , Hash: 1 ], </span></p>
<p class=""><span class="">[2015-09-22 05:30:57.897819] I
[MSGID: 100030] [glusterfsd.c:2301:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs
version 3.7.4 (args: /usr/sbin/glusterfs
--volfile-server=sjcvhost02 --volfile-server=sjcstorage01
--volfile-server=sjcstorage02 --volfile-id=/vmstore
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.909889] I
[MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 1</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.923087] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-0:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.925701] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-1:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.927984] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-2:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">Final graph:</span></p>
<p class=""><span class="">+------------------------------------------------------------------------------+</span></p>
<p class=""><span class=""> 1: volume vmstore-client-0</span></p>
<p class=""><span class=""> 2: type protocol/client</span></p>
<p class=""><span class=""> 3: option ping-timeout 42</span></p>
<p class=""><span class=""> 4: option remote-host
sjcstorage01</span></p>
<p class=""><span class=""> 5: option remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class=""> 6: option transport-type
socket</span></p>
<p class=""><span class=""> 7: option send-gids true</span></p>
<p class=""><span class=""> 8: end-volume</span></p>
<p class=""><span class=""> 9: </span></p>
<p class=""><span class=""> 10: volume vmstore-client-1</span></p>
<p class=""><span class=""> 11: type protocol/client</span></p>
<p class=""><span class=""> 12: option ping-timeout 42</span></p>
<p class=""><span class=""> 13: option remote-host
sjcstorage02</span></p>
<p class=""><span class="">
</span></p>
<p class=""><span class=""> 14: option remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class=""> 15: option transport-type
socket</span></p>
<p class=""><span class=""> 16: option send-gids true</span></p>
<p class=""><span class=""> 17: end-volume</span></p>
<p class=""><span class=""> 18: </span></p>
<p class=""><span class=""> 19: volume vmstore-client-2</span></p>
<p class=""><span class=""> 20: type protocol/client</span></p>
<p class=""><span class=""> 21: option ping-timeout 42</span></p>
<p class=""><span class=""> 22: option remote-host
sjcvhost02</span></p>
<p class=""><span class=""> 23: option remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class=""> 24: option transport-type
socket</span></p>
<p class=""><span class=""> 25: option send-gids true</span></p>
<p class=""><span class=""> 26: end-volume</span></p>
<p class=""><span class=""> 27: </span></p>
<p class=""><span class=""> 28: volume vmstore-replicate-0</span></p>
<p class=""><span class=""> 29: type cluster/replicate</span></p>
<p class=""><span class=""> 30: option arbiter-count 1</span></p>
<p class=""><span class=""> 31: subvolumes
vmstore-client-0 vmstore-client-1 vmstore-client-2</span></p>
<p class=""><span class=""> 32: end-volume</span></p>
<p class=""><span class=""> 33: </span></p>
<p class=""><span class=""> 34: volume vmstore-dht</span></p>
<p class=""><span class=""> 35: type cluster/distribute</span></p>
<p class=""><span class=""> 36: subvolumes
vmstore-replicate-0</span></p>
<p class=""><span class=""> 37: end-volume</span></p>
<p class=""><span class=""> 38: </span></p>
<p class=""><span class=""> 39: volume vmstore-write-behind</span></p>
<p class=""><span class=""> 40: type
performance/write-behind</span></p>
<p class=""><span class=""> 41: subvolumes vmstore-dht</span></p>
<p class=""><span class=""> 42: end-volume</span></p>
<p class=""><span class=""> 43: </span></p>
<p class=""><span class=""> 44: volume vmstore-read-ahead</span></p>
<p class=""><span class=""> 45: type
performance/read-ahead</span></p>
<p class=""><span class=""> 46: subvolumes
vmstore-write-behind</span></p>
<p class=""><span class=""> 47: end-volume</span></p>
<p class=""><span class=""> 48: </span></p>
<p class=""><span class=""> 49: volume vmstore-readdir-ahead</span></p>
<p class=""><span class=""> 50: type
performance/readdir-ahead</span></p>
<p class=""><span class=""> 51: subvolumes
vmstore-read-ahead</span></p>
<p class=""><span class=""> 52: end-volume</span></p>
<p class=""><span class=""> 53: </span></p>
<p class=""><span class=""> 54: volume vmstore-io-cache</span></p>
<p class=""><span class=""> 55: type performance/io-cache</span></p>
<p class=""><span class=""> 56: subvolumes
vmstore-readdir-ahead</span></p>
<p class=""><span class=""> 57: end-volume</span></p>
<p class=""><span class=""> 58: </span></p>
<p class=""><span class=""> 59: volume vmstore-quick-read</span></p>
<p class=""><span class=""> 60: type
performance/quick-read</span></p>
<p class=""><span class=""> 61: subvolumes
vmstore-io-cache</span></p>
<p class=""><span class=""> 62: end-volume</span></p>
<p class=""><span class=""> 63: </span></p>
<p class=""><span class=""> 64: volume vmstore-open-behind</span></p>
<p class=""><span class=""> 65: type
performance/open-behind</span></p>
<p class=""><span class=""> 66: subvolumes
vmstore-quick-read</span></p>
<p class=""><span class=""> 67: end-volume</span></p>
<p class=""><span class=""> 68: </span></p>
<p class=""><span class=""> 69: volume vmstore-md-cache</span></p>
<p class=""><span class="">
</span></p>
<p class=""><span class=""> 70: type performance/md-cache</span></p>
<p class=""><span class=""> 71: subvolumes
vmstore-open-behind</span></p>
<p class=""><span class=""> 72: end-volume</span></p>
<p class=""><span class=""> 73: </span></p>
<p class=""><span class=""> 74: volume vmstore</span></p>
<p class=""><span class=""> 75: type debug/io-stats</span></p>
<p class=""><span class=""> 76: option latency-measurement
off</span></p>
<p class=""><span class=""> 77: option count-fop-hits off</span></p>
<p class=""><span class=""> 78: subvolumes
vmstore-md-cache</span></p>
<p class=""><span class=""> 79: end-volume</span></p>
<p class=""><span class=""> 80: </span></p>
<p class=""><span class=""> 81: volume meta-autoload</span></p>
<p class=""><span class=""> 82: type meta</span></p>
<p class=""><span class=""> 83: subvolumes vmstore</span></p>
<p class=""><span class=""> 84: end-volume</span></p>
<p class=""><span class=""> 85: </span></p>
<p class=""><span class="">+------------------------------------------------------------------------------+</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.934021] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-0:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.934145] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-1:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.934491] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-2:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.942198] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-0: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.942545] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-1: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.942659] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-2: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.942797] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-0: Connected to vmstore-client-0,
attached to remote volume '/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.942808] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-0: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.943036] I
[MSGID: 108005] [afr-common.c:3998:afr_notify]
0-vmstore-replicate-0: Subvolume 'vmstore-client-0' came
back up; going online.</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.943078] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-1: Connected to vmstore-client-1,
attached to remote volume '/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.943086] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-1: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.943292] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-2: Connected to vmstore-client-2,
attached to remote volume '/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.943302] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-2: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.953887] I
[fuse-bridge.c:5053:fuse_graph_setup] 0-fuse: switched to
graph 0</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.954071] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-0: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.954105] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-1: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.954124] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-2: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.955282] I
[fuse-bridge.c:3979:fuse_init] 0-glusterfs-fuse: FUSE
inited with protocol versions: glusterfs 7.22 kernel 7.22</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.955738] I
[MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 2</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.970232] I
[fuse-bridge.c:4900:fuse_thread_proc] 0-fuse: unmounting
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.970834] W
[glusterfsd.c:1219:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7df5) [0x7f187139fdf5]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
[0x7f1872a09785]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x69)
[0x7f1872a09609] ) 0-: received signum (15), shutting down</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.970848] I
[fuse-bridge.c:5595:fini] 0-fuse: Unmounting
'/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.</span></p>
<p class=""><span class="">[2015-09-22 05:30:58.420973] I
[fuse-bridge.c:4900:fuse_thread_proc] 0-fuse: unmounting
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore</span></p>
<p class=""><span class="">[2015-09-22 05:30:58.421355] W
[glusterfsd.c:1219:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7df5) [0x7f8267cd4df5]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
[0x7f826933e785]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x69)
[0x7f826933e609] ) 0-: received signum (15), shutting down</span></p>
<p class=""><span class="">[2015-09-22 05:30:58.421369] I
[fuse-bridge.c:5595:fini] 0-fuse: Unmounting
'/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.534410] I
[MSGID: 100030] [glusterfsd.c:2301:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs
version 3.7.4 (args: /usr/sbin/glusterfs
--volfile-server=sjcvhost02 --volfile-server=sjcstorage01
--volfile-server=sjcstorage02 --volfile-id=/vmstore
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.545686] I
[MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 1</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.553019] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-0:
parent translators are ready, attempting connect on
transport</span></p>
<p class="">
</p>
<p class=""><span class="">[2015-09-22 05:31:09.555552] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-1:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.557989] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-2:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">Final graph:</span></p>
<p class=""><span class="">+------------------------------------------------------------------------------+</span></p>
<p class=""><span class=""> 1: volume vmstore-client-0</span></p>
<p class=""><span class=""> 2: type protocol/client</span></p>
<p class=""><span class=""> 3: option ping-timeout 42</span></p>
<p class=""><span class=""> 4: option remote-host
sjcstorage01</span></p>
<p class=""><span class=""> 5: option remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class=""> 6: option transport-type
socket</span></p>
<p class=""><span class=""> 7: option send-gids true</span></p>
<p class=""><span class=""> 8: end-volume</span></p>
<p class=""><span class=""> 9: </span></p>
<p class=""><span class=""> 10: volume vmstore-client-1</span></p>
<p class=""><span class=""> 11: type protocol/client</span></p>
<p class=""><span class=""> 12: option ping-timeout 42</span></p>
<p class=""><span class=""> 13: option remote-host
sjcstorage02</span></p>
<p class=""><span class=""> 14: option remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class=""> 15: option transport-type
socket</span></p>
<p class=""><span class=""> 16: option send-gids true</span></p>
<p class=""><span class=""> 17: end-volume</span></p>
<p class=""><span class=""> 18: </span></p>
<p class=""><span class=""> 19: volume vmstore-client-2</span></p>
<p class=""><span class=""> 20: type protocol/client</span></p>
<p class=""><span class=""> 21: option ping-timeout 42</span></p>
<p class=""><span class=""> 22: option remote-host
sjcvhost02</span></p>
<p class=""><span class=""> 23: option remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class=""> 24: option transport-type
socket</span></p>
<p class=""><span class=""> 25: option send-gids true</span></p>
<p class=""><span class=""> 26: end-volume</span></p>
<p class=""><span class=""> 27: </span></p>
<p class=""><span class=""> 28: volume vmstore-replicate-0</span></p>
<p class=""><span class=""> 29: type cluster/replicate</span></p>
<p class=""><span class=""> 30: option arbiter-count 1</span></p>
<p class=""><span class=""> 31: subvolumes
vmstore-client-0 vmstore-client-1 vmstore-client-2</span></p>
<p class=""><span class=""> 32: end-volume</span></p>
<p class=""><span class=""> 33: </span></p>
<p class=""><span class=""> 34: volume vmstore-dht</span></p>
<p class=""><span class=""> 35: type cluster/distribute</span></p>
<p class=""><span class=""> 36: subvolumes
vmstore-replicate-0</span></p>
<p class=""><span class=""> 37: end-volume</span></p>
<p class=""><span class=""> 38: </span></p>
<p class=""><span class=""> 39: volume vmstore-write-behind</span></p>
<p class=""><span class=""> 40: type
performance/write-behind</span></p>
<p class=""><span class=""> 41: subvolumes vmstore-dht</span></p>
<p class=""><span class=""> 42: end-volume</span></p>
<p class=""><span class=""> 43: </span></p>
<p class=""><span class=""> 44: volume vmstore-read-ahead</span></p>
<p class=""><span class=""> 45: type
performance/read-ahead</span></p>
<p class=""><span class=""> 46: subvolumes
vmstore-write-behind</span></p>
<p class=""><span class=""> 47: end-volume</span></p>
<p class=""><span class=""> 48: </span></p>
<p class=""><span class=""> 49: volume vmstore-readdir-ahead</span></p>
<p class=""><span class=""> 50: type
performance/readdir-ahead</span></p>
<p class=""><span class=""> 51: subvolumes
vmstore-read-ahead</span></p>
<p class="">
</p>
<p class=""><span class=""> 52: end-volume</span></p>
<p class=""><span class=""> 53: </span></p>
<p class=""><span class=""> 54: volume vmstore-io-cache</span></p>
<p class=""><span class=""> 55: type performance/io-cache</span></p>
<p class=""><span class=""> 56: subvolumes
vmstore-readdir-ahead</span></p>
<p class=""><span class=""> 57: end-volume</span></p>
<p class=""><span class=""> 58: </span></p>
<p class=""><span class=""> 59: volume vmstore-quick-read</span></p>
<p class=""><span class=""> 60: type
performance/quick-read</span></p>
<p class=""><span class=""> 61: subvolumes
vmstore-io-cache</span></p>
<p class=""><span class=""> 62: end-volume</span></p>
<p class=""><span class=""> 63: </span></p>
<p class=""><span class=""> 64: volume vmstore-open-behind</span></p>
<p class=""><span class=""> 65: type
performance/open-behind</span></p>
<p class=""><span class=""> 66: subvolumes
vmstore-quick-read</span></p>
<p class=""><span class=""> 67: end-volume</span></p>
<p class=""><span class=""> 68: </span></p>
<p class=""><span class=""> 69: volume vmstore-md-cache</span></p>
<p class=""><span class=""> 70: type performance/md-cache</span></p>
<p class=""><span class=""> 71: subvolumes
vmstore-open-behind</span></p>
<p class=""><span class=""> 72: end-volume</span></p>
<p class=""><span class=""> 73: </span></p>
<p class=""><span class=""> 74: volume vmstore</span></p>
<p class=""><span class=""> 75: type debug/io-stats</span></p>
<p class=""><span class=""> 76: option latency-measurement
off</span></p>
<p class=""><span class=""> 77: option count-fop-hits off</span></p>
<p class=""><span class=""> 78: subvolumes
vmstore-md-cache</span></p>
<p class=""><span class=""> 79: end-volume</span></p>
<p class=""><span class=""> 80: </span></p>
<p class=""><span class=""> 81: volume meta-autoload</span></p>
<p class=""><span class=""> 82: type meta</span></p>
<p class=""><span class=""> 83: subvolumes vmstore</span></p>
<p class=""><span class=""> 84: end-volume</span></p>
<p class=""><span class=""> 85: </span></p>
<p class=""><span class="">+------------------------------------------------------------------------------+</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.563262] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-0:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.563431] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-1:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.563877] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-2:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.572443] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-1: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.572599] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-0: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.572742] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-2: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.573165] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-1: Connected to vmstore-client-1,
attached to remote volume '/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.573186] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-1: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.573395] I
[MSGID: 108005] [afr-common.c:3998:afr_notify]
0-vmstore-replicate-0: Subvolume 'vmstore-client-1' came
back up; going online.</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.573427] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-0: Connected to vmstore-client-0,
attached to remote volume '/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.573435] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-0: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.573754] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-2: Connected to vmstore-client-2,
attached to remote volume '/export/vmstore/brick01'.</span></p>
<p class="">
</p>
<p class=""><span class="">[2015-09-22 05:31:09.573783] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-2: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.577192] I
[fuse-bridge.c:5053:fuse_graph_setup] 0-fuse: switched to
graph 0</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.577302] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-1: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.577325] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-0: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.577339] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-2: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.578125] I
[fuse-bridge.c:3979:fuse_init] 0-glusterfs-fuse: FUSE
inited with protocol versions: glusterfs 7.22 kernel 7.22</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.578636] I
[MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 2</span></p>
<p class=""><span class="">[2015-09-22 05:31:10.073698] I
[fuse-bridge.c:4900:fuse_thread_proc] 0-fuse: unmounting
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore</span></p>
<p class=""><span class="">[2015-09-22 05:31:10.073977] W
[glusterfsd.c:1219:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7df5) [0x7f6b9ba88df5]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
[0x7f6b9d0f2785]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x69)
[0x7f6b9d0f2609] ) 0-: received signum (15), shutting down</span></p>
<p class=""><span class="">[2015-09-22 05:31:10.073993] I
[fuse-bridge.c:5595:fini] 0-fuse: Unmounting
'/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.</span></p>
<p class=""><span class="">[2015-09-22 05:31:20.184700] I
[MSGID: 100030] [glusterfsd.c:2301:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs
version 3.7.4 (args: /usr/sbin/glusterfs
--volfile-server=sjcvhost02 --volfile-server=sjcstorage01
--volfile-server=sjcstorage02 --volfile-id=/vmstore
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)</span></p>
<p class=""><span class="">[2015-09-22 05:31:20.194928] I
[MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 1</span></p>
<p class=""><span class="">[2015-09-22 05:31:20.200701] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-0:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">[2015-09-22 05:31:20.203110] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-1:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">[2015-09-22 05:31:20.205708] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-2:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">
</span></p>
<p class=""><span class="">Final graph:</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class="">Hope this helps. </span></p>
<p class=""><span class=""><br>
</span></p>
<p class="">thanks again</p>
<p class=""><br>
</p>
<p class="">Brett Stevens</p>
<p class=""><br>
</p>
</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Tue, Sep 22, 2015 at 10:14 PM,
Sahina Bose <span dir="ltr"><<a moz-do-not-send="true"
href="mailto:sabose@redhat.com" target="_blank">sabose(a)redhat.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF"><span class=""> <br>
<br>
<div>On 09/22/2015 02:17 PM, Brett Stevens wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Hi. First time on the lists. I've
searched for this but no luck so sorry if this has
been covered before.
<div><br>
</div>
<div>I'm working with the latest 3.6 beta with the
following infrastructure. </div>
<div><br>
</div>
<div>1 management host (to be used for a number of
tasks, so I chose not to use self-hosted; we are a
school and will need to keep an eye on hardware
costs)</div>
<div>2 compute nodes</div>
<div>2 gluster nodes</div>
<div><br>
</div>
<div>So far I have built one gluster volume using the
gluster CLI, with 2 data nodes and one arbiter
node (the management host).</div>
<div><br>
</div>
<div>So far, every time I create a volume, it shows
up straight away in the oVirt GUI. However, no matter
what I try, I cannot create or import it as a data
domain. </div>
<div><br>
</div>
<div>The current error in the oVirt GUI is "Error
while executing action AddGlusterFsStorageDomain:
Error creating a storage domain's metadata".</div>
</div>
</blockquote>
<br>
</span> Please provide vdsm and gluster logs<span class=""><br>
<br>
<blockquote type="cite">
<div dir="ltr">
<div><br>
</div>
<div>The logs are continuously cycling through the
following errors:</div>
<div>
<p><span>Scheduler_Worker-53) [] START,
GlusterVolumesListVDSCommand(HostName =
sjcstorage02,
GlusterVolumesListVDSParameters:{runAsync='true',
hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}),
log id: 24198fbf</span></p>
<p><span>2015-09-22 03:57:29,903 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-53) [] Could
not associate brick
'sjcstorage01:/export/vmstore/brick01' of
volume '878a316d-2394-4aae-bdf8-e10eea38225e'
with correct network as no gluster network
found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
</div>
</div>
</blockquote>
<br>
</span> What is the hostname provided in the oVirt engine for
<span>sjcstorage01? Does this host have multiple NICs?<br>
<br>
Could you provide the output of gluster volume info?<br>
Please note that these errors are not related to the error
in creating the storage domain. However, they could
prevent you from monitoring the state of the gluster volume
from oVirt.<br>
<br>
</span>
<blockquote type="cite"><span class="">
<div dir="ltr">
<div>
<p><span>2015-09-22 03:57:29,905 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-53) [] Could
not associate brick
'sjcstorage02:/export/vmstore/brick01' of
volume '878a316d-2394-4aae-bdf8-e10eea38225e'
with correct network as no gluster network
found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p><span>2015-09-22 03:57:29,905 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-53) [] Could
not add brick
'sjcvhost02:/export/vmstore/brick01' to volume
'878a316d-2394-4aae-bdf8-e10eea38225e' -
server uuid
'29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not
found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p><span>2015-09-22 03:57:29,905 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-53) [] FINISH,
GlusterVolumesListVDSCommand, return:
{878a316d-2394-4aae-bdf8-e10eea38225e=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@41e93fb1},
log id: 24198fbf</span></p>
<p><span><br>
</span></p>
<p><span>I'm new to ovirt and gluster, so any help
would be great</span></p>
<p><span><br>
</span></p>
<p><span>thanks</span></p>
<p><span><br>
</span></p>
<p><span>Brett Stevens</span></p>
</div>
</div>
<br>
<fieldset></fieldset>
<br>
</span><span class="">
<pre>_______________________________________________
Users mailing list
<a moz-do-not-send="true" href="mailto:Users@ovirt.org" target="_blank">Users(a)ovirt.org</a>
<a moz-do-not-send="true" href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a>
</pre>
</span></blockquote>
<br>
</div>
</blockquote>
</div>
<br>
</div>
</blockquote>
<br>
</body>
</html>
Hi list!
After a "war week" I finally got a systemd script that puts the host into "maintenance" when a shutdown is started.
Now the problem is that the automatic migration of the VM does NOT work...
In the web console I see the host go to "Preparing for maintenance" and the VM start migrating; then the host is in "maintenance", and a couple of seconds later the VM is killed on the other host...
In the engine log I see
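A maintenance-on-shutdown hook along these lines is typically done with a systemd unit whose `ExecStop` action runs while the network and VDSM are still up. This is only a sketch, not the script from this thread; the unit name, the helper script path, and the engine API call mentioned in the comments are placeholders:

```ini
# /etc/systemd/system/ovirt-host-maintenance.service  (sketch; names and paths are placeholders)
[Unit]
Description=Put this host into oVirt maintenance before shutdown
# Ordering After= these units means that, at shutdown, this unit is
# stopped BEFORE vdsmd and the network are torn down, so a live
# migration triggered from ExecStop can still complete.
After=network-online.target vdsmd.service
Requires=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
# Nothing to do at boot; the interesting part is ExecStop.
ExecStart=/bin/true
# Hypothetical helper that asks the engine REST API to deactivate this
# host and then polls until all VMs have migrated off before returning.
ExecStop=/usr/local/sbin/ovirt-host-maintenance.sh
# Give the migrations time to finish before systemd gives up.
TimeoutStopSec=600

[Install]
WantedBy=multi-user.target
```

The key detail is the ordering and the stop timeout: if the helper returns (or times out) before the engine has finished migrating the VMs, the host goes down with migrations still in flight, which would match the "VM killed on the other host" behaviour described above.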
Hi,
I have tried several times to import a VMware VM from a vCenter installation, but it just seems to hang when I hit the "load" button. I have had a look at vdsm.log, but there is so much going on (presumably due to debug logging) that it is hard to distinguish what is happening. Does anyone have any pointers on how to work out what is going on?
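One quick way to cut a noisy vdsm.log down to the interesting lines is to filter out the DEBUG entries so only WARNING/ERROR lines remain. A minimal sketch; the sample log lines below are made up for illustration and are not from this thread:

```shell
# Write a small vdsm-style sample log (fabricated lines, for illustration only).
cat > /tmp/sample-vdsm.log <<'EOF'
Thread-12::DEBUG::2015-09-22 05:31:10,073::task::595::Storage.TaskManager.Task::(_updateState) moving from state init
Thread-12::ERROR::2015-09-22 05:31:10,074::task::866::Storage.TaskManager.Task::(_setError) Unexpected error
Thread-13::DEBUG::2015-09-22 05:31:10,075::__init__::298::IOProcessClient::(_run) Starting IOProcess
Thread-13::WARNING::2015-09-22 05:31:10,076::fileSD::749::Storage.scanDomains::(collectMetaFiles) Metadata collection timed out
EOF

# Keep only the WARNING/ERROR lines; DEBUG noise is dropped.
grep -E '::(WARNING|ERROR)::' /tmp/sample-vdsm.log
```

The same filter works on the real log (`grep -E '::(WARNING|ERROR)::' /var/log/vdsm/vdsm.log`); alternatively, the debug level itself can be lowered in vdsm's logger configuration.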
Best regards
Ian Fraser
________________________________
The information in this message and any attachment is intended for the addressee and is confidential. If you are not that addressee, no action should be taken in reliance on the information and you should please reply to this message immediately to inform us of incorrect receipt and destroy this message and any attachments.
For the purposes of internet-level email security, incoming and outgoing emails may be read by personnel other than the named recipient or sender.
Whilst all reasonable efforts are made, ASM (UK) Ltd cannot guarantee that emails and attachments are virus free or compatible with your systems. You should make your own checks and ASM (UK) Ltd does not accept liability in respect of viruses or computer problems experienced.
Registered address: Agency Sector Management (UK) Ltd. Ashford House, 41-45 Church Road, Ashford, Middlesex, TW15 2TQ
Registered in England No. 2053849
______________________________________________________________________
This email has been scanned by the Symantec Email Security.cloud service.
For more information please visit http://www.symanteccloud.com
______________________________________________________________________