
[ANN] oVirt 3.5.5 Second Release Candidate is now available for testing
by Sandro Bonazzola 24 Sep '15
The oVirt Project is pleased to announce the availability
of the Second Release Candidate of oVirt 3.5.5 for testing, as of September
24th, 2015.
This release is available now for
Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and
Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar).
This release supports Hypervisor Hosts running
Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar),
Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar) and Fedora 21.
This release of oVirt 3.5.5 includes new DWH and reports packages.
See the release notes [1] for an initial list of bugs fixed.
Please refer to release notes [1] for Installation / Upgrade instructions.
A new oVirt Node ISO will be available soon as well [2].
Please note that mirrors [3] usually need about one day to synchronize.
Please refer to the release notes for known issues in this release.
Please add yourself to the test page [4] if you're testing this release.
[1] http://www.ovirt.org/OVirt_3.5.5_Release_Notes
[2] http://plain.resources.ovirt.org/pub/ovirt-3.5-pre/iso/
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors
[4] http://www.ovirt.org/Testing/oVirt_3.5.5_Testing
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
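For anyone testing the upgrade path, a minimal sketch of the usual engine update flow, assuming the 3.5 pre-release repository from the release notes [1] is already enabled (package names follow the standard oVirt layout; adjust for your environment):
# on the engine machine: pull the new setup packages, then re-run engine-setup
yum update "ovirt-engine-setup*"
engine-setup
# on each host: move it to maintenance from the webadmin first, then update packages
yum update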
Hi,
last Sunday I experienced a power outage with one of my two oVirt
hypervisors. After power was restored I experienced some weirdness:
- on one of the VMs running on this hypervisor the boot disk changed, so
it was no longer able to boot. Looking at the console the VM would hang
on "Booting from hard disk". After I noticed that the wrong virtual disk
was marked as OS/bootable I got it booting again after correcting it to
the proper boot disk. This was done from the oVirt management server.
- on another VM I tried today to add another virtual disk to expand a
LVM volume. In dmesg I can see the new device:
[17167560.005768] vdc: unknown partition table
However, when I tried to run pvcreate I got an error message saying that
this was already marked as an LVM disk, and then running pvs gives me the
following error:
# pvs
Couldn't find device with uuid 7dDcyq-TZ6I-96Im-lfjL-cTUv-nff1-B11Mm7.
PV VG Fmt Attr PSize PFree
/dev/vda5 rit-kvm-ssweb02 lvm2 a-- 59.76g 0
/dev/vdb1 vg_syncshare lvm2 a-- 500.00g 0
/dev/vdc1 VG_SYNCSHARE01 lvm2 a-- 400.00g 0
unknown device VG_SYNCSHARE01 lvm2 a-m 1024.00g 0
As you can see there's already a PV called /dev/vdc1, as well as another
one named "unknown device". These two PVs belong to a VG that does NOT
belong to this VM, VG_SYNCSHARE01. The UUIDs for these two PVs are:
--- Physical volume ---
PV Name unknown device
VG Name VG_SYNCSHARE01
PV Size 1024.00 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 262143
Free PE 0
Allocated PE 262143
PV UUID 7dDcyq-TZ6I-96Im-lfjL-cTUv-nff1-B11Mm7
--- Physical volume ---
PV Name /dev/vdc1
VG Name VG_SYNCSHARE01
PV Size 400.00 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 102399
Free PE 0
Allocated PE 102399
PV UUID oKSDoo-3pxU-0uue-zQ0H-kv1N-lyPa-P2M2FY
The two PVs which don't belong on this VM actually belong to a
totally different VM.
On VM number two:
# pvdisplay
--- Physical volume ---
PV Name /dev/vdb1
VG Name VG_SYNCSHARE01
PV Size 1024.00 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 262143
Free PE 0
Allocated PE 262143
PV UUID 7dDcyq-TZ6I-96Im-lfjL-cTUv-nff1-B11Mm7
--- Physical volume ---
PV Name /dev/vdd1
VG Name VG_SYNCSHARE01
PV Size 400.00 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 102399
Free PE 0
Allocated PE 102399
PV UUID oKSDoo-3pxU-0uue-zQ0H-kv1N-lyPa-P2M2FY
As you can see, same UUID and VG name, but two different VMs.
My setup:
oVirt manager: oVirt 3.5 running on CentOS 6.7
oVirt hypervisors: two oVirt 3.5 servers running on CentOS 6.7
During the time of the power outage mentioned earlier I was running
oVirt 3.4, but I upgraded today and rebooted the manager and both
hypervisors, but NOT the VMs.
Virtual machines:
Debian wheezy 7.9 x86_64
Storage:
HP LeftHand iSCSI
I have tried to locate error messages in the logs which could be related
to this behaviour, but so far no luck :(
--
Morten A. Middelthon
Email: morten(a)flipp.net
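For reference, a minimal sketch of how one might check from inside the guest which oVirt disk image is really attached as /dev/vdc and where the duplicate PV signature comes from. Device names are the ones from the mail above; the assumption that the virtio serial carries the start of the oVirt disk UUID should be verified against the webadmin:
# list virtio disks with their serials; each serial should match the start of the
# disk ID shown for this VM in the oVirt webadmin
ls -l /dev/disk/by-id/virtio-*
# show the on-disk signatures and the PV/VG metadata LVM is picking up
blkid /dev/vdc /dev/vdc1
pvs -o pv_name,pv_uuid,vg_name,dev_size
# if the wrong image really is attached, detach it in oVirt instead of wiping it;
# running pvcreate --force on a disk that belongs to another VM would destroy its data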
Hi,
can you please provide me the tools to back up and restore the VM images in
oVirt?
Thanks,
Nagaraju
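For reference, a minimal sketch of one common approach on oVirt 3.x: export the VM to an export storage domain through the REST API and import it again on the target side. The engine URL, credentials and export domain name below are placeholders, not values from this thread:
# export a VM to an export domain (the VM should be down);
# <vm-id> comes from GET /api/vms
curl -k -u admin@internal:PASSWORD -H "Content-Type: application/xml" \
  -d '<action><storage_domain><name>export1</name></storage_domain></action>' \
  https://engine.example.com/api/vms/<vm-id>/export
# importing back is done from the export domain in the webadmin, or with a
# POST to /api/storagedomains/<export-domain-id>/vms/<vm-id>/import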

Not able to resume a VM which was paused because of gluster quorum issue
by Ramesh Nachimuthu 24 Sep '15
Hi,
I am not able to resume a VM which was paused because of a gluster
client quorum issue. Here is what happened in my setup.
1. Created a gluster storage domain which is backed by gluster volume
with replica 3.
2. Killed one brick process. So only two bricks are running in replica 3
setup.
3. Created two VMs
4. Started some IO using fio on both of the VMs
5. After some time I got the following error in the gluster mount and the VMs
moved to paused state.
" server 10.70.45.17:49217 has not responded in the last 42
seconds, disconnecting."
"vmstore-replicate-0: e16d1e40-2b6e-4f19-977d-e099f465dfc6:
Failing WRITE as quorum is not met"
more gluster mount logs at http://pastebin.com/UmiUQq0F
6. After some time gluster quorum is active again and I am able to write to
the gluster file system.
7. When I try to resume the VM it doesn't work and I get the following error
in the vdsm log.
http://pastebin.com/aXiamY15
Regards,
Ramesh
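For reference, a minimal sketch of how one might try to resume the paused guest directly on the host while collecting more detail. The vdsClient verbs and the read-only virsh calls below are assumptions about what is available on the host, not taken from this thread:
# see the VM id and the status vdsm reports for it
vdsClient -s 0 list table
# ask vdsm to continue the paused VM
vdsClient -s 0 continue <vmId>
# cross-check what libvirt itself thinks (read-only)
virsh -r list --all
virsh -r domstate <vm-name> --reason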
Hi All,
Can someone help me in configuring LDAP authentication for oVirt?
Thanks,
Nagaraju
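For reference, a minimal sketch of the generic LDAP route via the ovirt-engine-extension-aaa-ldap extension. The package name is the real one; the example profile path and the exact files to copy should be checked against the README shipped with your version:
# on the engine machine
yum install ovirt-engine-extension-aaa-ldap
# copy one of the shipped example profiles (the "simple" one here), then edit the
# server address, base DN and bind credentials in the copied files
cp -r /usr/share/ovirt-engine-extension-aaa-ldap/examples/simple/. /etc/ovirt-engine/
# restart the engine so the new authn/authz extensions are loaded
service ovirt-engine restart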
I'm exporting a VM as part of testing backing up VMs and this 75GB VM has
been exporting for 3+ hours. The storage is running gluster on 10GbE so
bandwidth isn't the issue. Importing these VMs from the same export took
roughly 10 minutes. But I'm not seeing any errors. The only thing in the
engine.log is:
/var/log/ovirt-engine/engine.log:2015-09-21 13:16:14,459 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-6) [3ab846b4] Correlation ID: 1db67021, Job
ID: b348e823-fb6a-42b8-8561-8ab6047a27c7, Call Stack: null, Custom Event
ID: -1, Message: Starting export Vm asdf to export
Is there a way to kill the export?
--
*Michael Kleinpaste*
Senior Systems Administrator
SharperLending, LLC.
www.SharperLending.com
Michael.Kleinpaste(a)SharperLending.com
(509) 324-1230 Fax: (509) 324-1234
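For reference, a minimal sketch of how one might inspect and stop the underlying SPM task when an export never finishes. The vdsClient verbs below are the usual ones for task handling; treat them as an assumption and prefer letting the task finish or fail on its own when possible:
# on the SPM host: list the storage tasks vdsm is running and their status
vdsClient -s 0 getAllTasksInfo
vdsClient -s 0 getAllTasksStatuses
# stop and then clear the stuck task (use the task UUID from the output above)
vdsClient -s 0 stopTask <task-uuid>
vdsClient -s 0 clearTask <task-uuid>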
Hi,
Is there any documentation about FreeIPA integration with oVirt 3.5 and how to configure it?
Thanks
Jose
--
Jose Ferradeira
http://www.logicworks.pt
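For reference, a minimal sketch of the legacy Kerberos/LDAP route that oVirt 3.5 still ships for IPA; the domain name and user below are placeholders, and the newer ovirt-engine-extension-aaa-ldap extension (see the LDAP thread above) is the other option:
# on the engine machine: attach the engine to the IPA domain, then restart it
engine-manage-domains add --domain=ipa.example.com --provider=IPA --user=admin
engine-manage-domains list
service ovirt-engine restart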

23 Sep '15
On Wed, Sep 23, 2015 at 12:14:02PM -0400, Douglas Schilling Landgraf wrote:
>
> On 09/22/2015 12:27 AM, Budur Nagaraju wrote:
> >Below is the format I have updated ,
> >but still am facing the same issues.
To: Budur Nagaraju
Please keep all your replies on the mailing list, as the mailing list
archives are there to help others who may have the same problem in
future. If you prefer to have personal help, you can pay for a
Red Hat subscription.
Please also update to the new virt-v2v version, as described in my
previous email. The old version is unmaintained, and may not even
work with oVirt (I don't know -- no one has tried it for about 3
years).
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-builder quickly builds VMs from scratch
http://libguestfs.org/virt-builder.1.html
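For reference, a minimal sketch of a conversion with the current virt-v2v, writing straight into an oVirt export storage domain. The NFS path, vCenter URI and guest name are placeholders:
# convert a local disk image and place the result in an oVirt export domain
virt-v2v -i disk /var/tmp/guest.img -o rhev -os nfs.example.com:/export/ovirt-export
# or convert a guest directly from VMware vCenter
virt-v2v -ic vpx://vcenter.example.com/Datacenter/esxi01 \
  -o rhev -os nfs.example.com:/export/ovirt-export "GuestName"
After the conversion the guest shows up in the export domain and can be imported from the webadmin.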
Hi list!
Sorry for the previous e-mail... problem with my Outlook... :(
Here again...
After a "war-week" I finally got a systemd-script to put the host in
"maintenance" when a shutdown will started.
Now the problem is, that the automatically migration of the VM does NOT
work...
I see in the Web console the host will "Preparing for maintenance" and the
VM will start the migration, then the host is in "maintenance" and a couple of
seconds later the VM will be killed on the other host...
In the Log of the engine I see:
2015-09-23 11:14:17,165 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-30) [683624fe] Correlation ID: 52938d4d, Job ID: c0efe5b9-0bc3-4c81-9ee7-63ddf90a6afc, Call Stack: null, Custom Event ID: -1, Message: Migration failed while Host is in 'preparing for maintenance' state.
Consider manual intervention: stopping/migrating Vms as Host's state will not
turn to maintenance while VMs are still running on it.(VM: TestVM, Source: vmhost06, Destination: vmhost03).
2015-09-23 11:14:17,165 INFO [org.ovirt.engine.core.bll.InternalMigrateVmCommand] (org.ovirt.thread.pool-8-thread-30) [683624fe] Lock freed to object EngineLock [exclusiveLocks= key: aabf6e76-8387-4441-a328-6a7dc32e2b4d value: VM
, sharedLocks= ]
(see http://pastebin.com/3Ca8W3vE)
On the host I see these two errors:
libvirtEventLoop::ERROR::2015-09-23 11:14:14,690::task::866::Storage.TaskManager.Task::(_setError) Task=`2670e82a-c9c7-4da6-b6f6-cff7bce25da1`::Unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 873, in _run
return fn(*args, **kargs)
File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/storage/hsm.py", line 3209, in inappropriateDevices
fails = supervdsm.getProxy().rmAppropriateRules(thiefId)
File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
return callMethod()
File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
**kwargs)
File "<string>", line 2, in rmAppropriateRules
File "/usr/lib64/python2.7/multiprocessing/managers.py", line 755, in _callmethod
self._connect()
File "/usr/lib64/python2.7/multiprocessing/managers.py", line 742, in _connect
conn = self._Client(self._token.address, authkey=self._authkey)
File "/usr/lib64/python2.7/multiprocessing/connection.py", line 173, in Client
c = SocketClient(address)
File "/usr/lib64/python2.7/multiprocessing/connection.py", line 308, in SocketClient
s.connect(address)
File "/usr/lib64/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
error: [Errno 2] No such file or directory
libvirtEventLoop::ERROR::2015-09-23 11:14:14,696::dispatcher::79::Storage.Dispatcher::(wrapper) [Errno 2] No such file or directory
Traceback (most recent call last):
File "/usr/share/vdsm/storage/dispatcher.py", line 71, in wrapper
result = ctask.prepare(func, *args, **kwargs)
File "/usr/share/vdsm/storage/task.py", line 103, in wrapper
return m(self, *a, **kw)
File "/usr/share/vdsm/storage/task.py", line 1179, in prepare
raise self.error
error: [Errno 2] No such file or directory
libvirtEventLoop::DEBUG::2015-09-23 11:14:14,697::vm::2799::vm.Vm::(setDownStatus) vmId=`aabf6e76-8387-4441-a328-6a7dc32e2b4d`::Changed state to Down: User shut down from within the guest (code=7)
libvirtEventLoop::DEBUG::2015-09-23 11:14:14,698::sampling::425::vm.Vm::(stop) vmId=`aabf6e76-8387-4441-a328-6a7dc32e2b4d`::Stop statistics collection
Thread-891::ERROR::2015-09-23 11:14:14,704::migration::161::vm.Vm::(_recover) vmId=`aabf6e76-8387-4441-a328-6a7dc32e2b4d`::'NoneType' object has no attribute 'XMLDesc'
Thread-891::WARNING::2015-09-23 11:14:14,712::vm::1966::vm.Vm::(_set_lastStatus) vmId=`aabf6e76-8387-4441-a328-6a7dc32e2b4d`::trying to set state to Up when already Down
Thread-891::ERROR::2015-09-23 11:14:14,712::migration::260::vm.Vm::(run) vmId=`aabf6e76-8387-4441-a328-6a7dc32e2b4d`::Failed to migrate
Traceback (most recent call last):
File "/usr/share/vdsm/virt/migration.py", line 231, in run
self._setupRemoteMachineParams()
File "/usr/share/vdsm/virt/migration.py", line 132, in _setupRemoteMachineParams
self._machineParams['_srcDomXML'] = self._vm._dom.XMLDesc(0)
AttributeError: 'NoneType' object has no attribute 'XMLDesc'
Can someone help me find the problem?
Thanks
Kind regards
Luca Bertoncello
--
Visit our web sites:
www.queo.biz Agency for brand management and communication
www.queoflow.com IT consulting and custom software development
Luca Bertoncello
Administrator
Phone: +49 351 21 30 38 0
Fax: +49 351 21 30 38 99
E-Mail: l.bertoncello(a)queo-group.com
queo GmbH
Tharandter Str. 13
01159 Dresden
Registered office: Dresden
Commercial register: Amtsgericht Dresden HRB 22352
Managing directors: Rüdiger Henke, André Pinkert
VAT ID: DE234220077
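For reference, a minimal sketch of the engine-side call such a shutdown unit usually makes, so that the engine itself drives the migrations before the local shutdown proceeds. The engine URL, credentials and host id are placeholders:
# ask the engine to move this host to maintenance; the engine then live-migrates the VMs
curl -k -u admin@internal:PASSWORD -H "Content-Type: application/xml" \
  -d '<action/>' https://engine.example.com/api/hosts/<host-id>/deactivate
# the unit should then poll GET /api/hosts/<host-id> until the status is
# "maintenance" before letting the shutdown continue; otherwise the migrations
# race against the host going down, which matches the error above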
+ ovirt-users
Some clarity needed on your setup -
sjcvhost03 - is this your arbiter node and oVirt management node? And
are you running compute + storage on the same nodes, i.e.
sjcstorage01, sjcstorage02, sjcvhost03 (arbiter)?
CreateStorageDomainVDSCommand(HostName = sjcvhost03,
CreateStorageDomainVDSCommandParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storageDomain='StorageDomainStatic:{name='sjcvmstore',
id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
args='sjcstorage01:/vmstore'}), log id: b9fe587
- fails with: Error creating a storage domain's metadata: ("create meta
file 'outbox' failed: [Errno 5] Input/output error",)
Are the vdsm logs you provided from sjcvhost03? There are no errors to
be seen in the gluster log you provided. Could you provide the mount log
from sjcvhost03 (at
/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore.log most likely)?
If possible, also /var/log/glusterfs/* from the 3 storage nodes.
thanks
sahina
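For reference, a minimal sketch of checks that usually narrow down an EIO on a freshly created gluster-backed domain, run from the hypervisor and from one storage node (the volume and host names are the ones from the logs quoted below):
# from the hypervisor: mount the volume by hand and try to write where vdsm would
mkdir -p /mnt/vmstore-test
mount -t glusterfs -o backup-volfile-servers=sjcstorage02:sjcvhost02 \
  sjcstorage01:/vmstore /mnt/vmstore-test
touch /mnt/vmstore-test/probe && rm /mnt/vmstore-test/probe
# from a storage node: check brick, self-heal and quorum health of the volume
gluster volume status vmstore
gluster volume heal vmstore info
gluster volume info vmstore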
On 09/23/2015 05:02 AM, Brett Stevens wrote:
> Hi Sahina,
>
> as requested here is some logs taken during a domain create.
>
> 2015-09-22 18:46:44,320 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-88) [] START,
> GlusterVolumesListVDSCommand(HostName = sjcstorage01,
> GlusterVolumesListVDSParameters:{runAsync='true',
> hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id: 2205ff1
>
> 2015-09-22 18:46:44,413 WARN
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
> (DefaultQuartzScheduler_Worker-88) [] Could not associate brick
> 'sjcstorage01:/export/vmstore/brick01' of volume
> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct network as no
> gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>
> 2015-09-22 18:46:44,417 WARN
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
> (DefaultQuartzScheduler_Worker-88) [] Could not associate brick
> 'sjcstorage02:/export/vmstore/brick01' of volume
> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct network as no
> gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>
> 2015-09-22 18:46:44,417 WARN
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
> (DefaultQuartzScheduler_Worker-88) [] Could not add brick
> 'sjcvhost02:/export/vmstore/brick01' to volume
> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' - server uuid
> '29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in cluster
> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>
> 2015-09-22 18:46:44,418 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-88) [] FINISH,
> GlusterVolumesListVDSCommand, return:
> {030f270a-0999-4df4-9b14-ae56eb0a2fb9=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a0628f36},
> log id: 2205ff1
>
> 2015-09-22 18:46:45,215 INFO
> [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
> (default task-24) [5099cda3] Lock Acquired to object
> 'EngineLock:{exclusiveLocks='[sjcstorage01:/vmstore=<STORAGE_CONNECTION,
> ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
>
> 2015-09-22 18:46:45,230 INFO
> [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
> (default task-24) [5099cda3] Running command:
> AddStorageServerConnectionCommand internal: false. Entities affected
> : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group
> CREATE_STORAGE_DOMAIN with role type ADMIN
>
> 2015-09-22 18:46:45,233 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (default task-24) [5099cda3] START,
> ConnectStorageServerVDSCommand(HostName = sjcvhost03,
> StorageServerConnectionManagementVDSParameters:{runAsync='true',
> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
> storagePoolId='00000000-0000-0000-0000-000000000000',
> storageType='GLUSTERFS',
> connectionList='[StorageServerConnections:{id='null',
> connection='sjcstorage01:/vmstore', iqn='null', vfsType='glusterfs',
> mountOptions='null', nfsVersion='null', nfsRetrans='null',
> nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 6a112292
>
> 2015-09-22 18:46:48,065 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (default task-24) [5099cda3] FINISH, ConnectStorageServerVDSCommand,
> return: {00000000-0000-0000-0000-000000000000=0}, log id: 6a112292
>
> 2015-09-22 18:46:48,073 INFO
> [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
> (default task-24) [5099cda3] Lock freed to object
> 'EngineLock:{exclusiveLocks='[sjcstorage01:/vmstore=<STORAGE_CONNECTION,
> ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
>
> 2015-09-22 18:46:48,188 INFO
> [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
> (default task-23) [6410419] Running command:
> AddGlusterFsStorageDomainCommand internal: false. Entities affected :
> ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group
> CREATE_STORAGE_DOMAIN with role type ADMIN
>
> 2015-09-22 18:46:48,206 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (default task-23) [6410419] START,
> ConnectStorageServerVDSCommand(HostName = sjcvhost03,
> StorageServerConnectionManagementVDSParameters:{runAsync='true',
> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
> storagePoolId='00000000-0000-0000-0000-000000000000',
> storageType='GLUSTERFS',
> connectionList='[StorageServerConnections:{id='ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e',
> connection='sjcstorage01:/vmstore', iqn='null', vfsType='glusterfs',
> mountOptions='null', nfsVersion='null', nfsRetrans='null',
> nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 38a2b0d
>
> 2015-09-22 18:46:48,219 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (default task-23) [6410419] FINISH, ConnectStorageServerVDSCommand,
> return: {ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=0}, log id: 38a2b0d
>
> 2015-09-22 18:46:48,221 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
> (default task-23) [6410419] START,
> CreateStorageDomainVDSCommand(HostName = sjcvhost03,
> CreateStorageDomainVDSCommandParameters:{runAsync='true',
> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
> storageDomain='StorageDomainStatic:{name='sjcvmstore',
> id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
> args='sjcstorage01:/vmstore'}), log id: b9fe587
>
> 2015-09-22 18:46:48,744 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-23) [6410419] Correlation ID: null, Call Stack: null,
> Custom Event ID: -1, Message: VDSM sjcvhost03 command failed: Error
> creating a storage domain's metadata: ("create meta file 'outbox'
> failed: [Errno 5] Input/output error",)
>
> 2015-09-22 18:46:48,744 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
> (default task-23) [6410419] Command
> 'org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand'
> return value 'StatusOnlyReturnForXmlRpc [status=StatusForXmlRpc
> [code=362, message=Error creating a storage domain's metadata:
> ("create meta file 'outbox' failed: [Errno 5] Input/output error",)]]'
>
> 2015-09-22 18:46:48,744 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
> (default task-23) [6410419] HostName = sjcvhost03
>
> 2015-09-22 18:46:48,745 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
> (default task-23) [6410419] Command
> 'CreateStorageDomainVDSCommand(HostName = sjcvhost03,
> CreateStorageDomainVDSCommandParameters:{runAsync='true',
> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
> storageDomain='StorageDomainStatic:{name='sjcvmstore',
> id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
> args='sjcstorage01:/vmstore'})' execution failed: VDSGenericException:
> VDSErrorException: Failed in vdscommand to CreateStorageDomainVDS,
> error = Error creating a storage domain's metadata: ("create meta file
> 'outbox' failed: [Errno 5] Input/output error",)
>
> 2015-09-22 18:46:48,745 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
> (default task-23) [6410419] FINISH, CreateStorageDomainVDSCommand, log
> id: b9fe587
>
> 2015-09-22 18:46:48,745 ERROR
> [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
> (default task-23) [6410419] Command
> 'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand'
> failed: EngineException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed in vdscommand to
> CreateStorageDomainVDS, error = Error creating a storage domain's
> metadata: ("create meta file 'outbox' failed: [Errno 5] Input/output
> error",) (Failed with error StorageDomainMetadataCreationError and
> code 362)
>
> 2015-09-22 18:46:48,755 INFO
> [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
> (default task-23) [6410419] Command
> [id=5ae15f53-69a1-47c5-b3a5-82f32c20e48f]: Compensating NEW_ENTITY_ID
> of org.ovirt.engine.core.common.businessentities.StorageDomainDynamic;
> snapshot: 597d5b5b-7c09-4de9-8840-6993bd9b61a6.
>
> 2015-09-22 18:46:48,758 INFO
> [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
> (default task-23) [6410419] Command
> [id=5ae15f53-69a1-47c5-b3a5-82f32c20e48f]: Compensating NEW_ENTITY_ID
> of org.ovirt.engine.core.common.businessentities.StorageDomainStatic;
> snapshot: 597d5b5b-7c09-4de9-8840-6993bd9b61a6.
>
> 2015-09-22 18:46:48,769 ERROR
> [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
> (default task-23) [6410419] Transaction rolled-back for command
> 'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand'.
>
> 2015-09-22 18:46:48,784 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-23) [6410419] Correlation ID: 6410419, Job ID:
> 78692780-a06f-49a5-b6b1-e6c24a820d62, Call Stack: null, Custom Event
> ID: -1, Message: Failed to add Storage Domain sjcvmstore. (User:
> admin@internal)
>
> 2015-09-22 18:46:48,996 INFO
> [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
> (default task-32) [1635a244] Lock Acquired to object
> 'EngineLock:{exclusiveLocks='[ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=<STORAGE_CONNECTION,
> ACTION_TYPE_FAILED_OBJECT_LOCKED>,
> sjcstorage01:/vmstore=<STORAGE_CONNECTION,
> ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
>
> 2015-09-22 18:46:49,018 INFO
> [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
> (default task-32) [1635a244] Running command:
> RemoveStorageServerConnectionCommand internal: false. Entities
> affected : ID: aaa00000-0000-0000-0000-123456789aaa Type:
> SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN
>
> 2015-09-22 18:46:49,024 INFO
> [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
> (default task-32) [1635a244] Removing connection
> 'ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e' from database
>
> 2015-09-22 18:46:49,026 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
> (default task-32) [1635a244] START,
> DisconnectStorageServerVDSCommand(HostName = sjcvhost03,
> StorageServerConnectionManagementVDSParameters:{runAsync='true',
> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
> storagePoolId='00000000-0000-0000-0000-000000000000',
> storageType='GLUSTERFS',
> connectionList='[StorageServerConnections:{id='ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e',
> connection='sjcstorage01:/vmstore', iqn='null', vfsType='glusterfs',
> mountOptions='null', nfsVersion='null', nfsRetrans='null',
> nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 39d3b568
>
> 2015-09-22 18:46:49,248 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
> (default task-32) [1635a244] FINISH,
> DisconnectStorageServerVDSCommand, return:
> {ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=0}, log id: 39d3b568
>
> 2015-09-22 18:46:49,252 INFO
> [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
> (default task-32) [1635a244] Lock freed to object
> 'EngineLock:{exclusiveLocks='[ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=<STORAGE_CONNECTION,
> ACTION_TYPE_FAILED_OBJECT_LOCKED>,
> sjcstorage01:/vmstore=<STORAGE_CONNECTION,
> ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
>
> 2015-09-22 18:46:49,431 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-3) [] START,
> GlusterVolumesListVDSCommand(HostName = sjcstorage01,
> GlusterVolumesListVDSParameters:{runAsync='true',
> hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id: 17014ae8
>
> 2015-09-22 18:46:49,511 WARN
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
> (DefaultQuartzScheduler_Worker-3) [] Could not associate brick
> 'sjcstorage01:/export/vmstore/brick01' of volume
> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct network as no
> gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>
> 2015-09-22 18:46:49,515 WARN
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
> (DefaultQuartzScheduler_Worker-3) [] Could not associate brick
> 'sjcstorage02:/export/vmstore/brick01' of volume
> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct network as no
> gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>
> 2015-09-22 18:46:49,516 WARN
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
> (DefaultQuartzScheduler_Worker-3) [] Could not add brick
> 'sjcvhost02:/export/vmstore/brick01' to volume
> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' - server uuid
> '29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in cluster
> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>
> 2015-09-22 18:46:49,516 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-3) [] FINISH,
> GlusterVolumesListVDSCommand, return:
> {030f270a-0999-4df4-9b14-ae56eb0a2fb9=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@92ed0f75},
> log id: 17014ae8
>
>
>
> ovirt engine thinks that sjcstorage01 is sjcstorage01, it's all testbed
> at the moment and all short names are defined in /etc/hosts (all
> copied to each server for consistency)
>
>
> volume info for vmstore is
>
>
> Status of volume: vmstore
>
> Gluster process TCP Port RDMA Port Online Pid
>
> ------------------------------------------------------------------------------
>
> Brick sjcstorage01:/export/vmstore/brick01 49157 0 Y 7444
>
> Brick sjcstorage02:/export/vmstore/brick01 49157 0 Y 4063
>
> Brick sjcvhost02:/export/vmstore/brick01 49156 0 Y 3243
>
> NFS Server on localhost 2049 0 Y 3268
>
> Self-heal Daemon on localhost N/A N/A Y 3284
>
> NFS Server on sjcstorage01 2049 0 Y 7463
>
> Self-heal Daemon on sjcstorage01 N/A N/A Y 7472
>
> NFS Server on sjcstorage02 2049 0 Y 4082
>
> Self-heal Daemon on sjcstorage02 N/A N/A Y 4090
>
> Task Status of Volume vmstore
>
> ------------------------------------------------------------------------------
>
> There are no active volume tasks
>
>
>
> vdsm logs from time the domain is added
>
>
> hread-789::DEBUG::2015-09-22
> 19:12:05,865::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Thread-790::DEBUG::2015-09-22
> 19:12:07,797::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::moving from state init ->
> state preparing
>
> Thread-790::INFO::2015-09-22
> 19:12:07,797::logUtils::48::dispatcher::(wrapper) Run and protect:
> repoStats(options=None)
>
> Thread-790::INFO::2015-09-22
> 19:12:07,797::logUtils::51::dispatcher::(wrapper) Run and protect:
> repoStats, Return response: {}
>
> Thread-790::DEBUG::2015-09-22
> 19:12:07,797::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::finished: {}
>
> Thread-790::DEBUG::2015-09-22
> 19:12:07,797::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::moving from state
> preparing -> state finished
>
> Thread-790::DEBUG::2015-09-22
> 19:12:07,797::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-790::DEBUG::2015-09-22
> 19:12:07,797::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-790::DEBUG::2015-09-22
> 19:12:07,797::task::993::Storage.TaskManager.Task::(_decref)
> Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::ref 0 aborting False
>
> Thread-790::DEBUG::2015-09-22
> 19:12:07,802::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Reactor thread::INFO::2015-09-22
> 19:12:14,816::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
> Accepting connection from 127.0.0.1:52510
>
> Reactor thread::DEBUG::2015-09-22
> 19:12:14,822::protocoldetector::82::ProtocolDetector.Detector::(__init__)
> Using required_size=11
>
> Reactor thread::INFO::2015-09-22
> 19:12:14,823::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
> Detected protocol xml from 127.0.0.1:52510
>
> Reactor thread::DEBUG::2015-09-22
> 19:12:14,823::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml
> over http detected from ('127.0.0.1', 52510)
>
> BindingXMLRPC::INFO::2015-09-22
> 19:12:14,823::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
> request handler for 127.0.0.1:52510
>
> Thread-791::INFO::2015-09-22
> 19:12:14,823::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52510 started
>
> Thread-791::INFO::2015-09-22
> 19:12:14,825::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52510 stopped
>
> Thread-792::DEBUG::2015-09-22
> 19:12:20,872::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Thread-793::DEBUG::2015-09-22
> 19:12:22,832::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::moving from state init ->
> state preparing
>
> Thread-793::INFO::2015-09-22
> 19:12:22,832::logUtils::48::dispatcher::(wrapper) Run and protect:
> repoStats(options=None)
>
> Thread-793::INFO::2015-09-22
> 19:12:22,832::logUtils::51::dispatcher::(wrapper) Run and protect:
> repoStats, Return response: {}
>
> Thread-793::DEBUG::2015-09-22
> 19:12:22,832::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::finished: {}
>
> Thread-793::DEBUG::2015-09-22
> 19:12:22,832::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::moving from state
> preparing -> state finished
>
> Thread-793::DEBUG::2015-09-22
> 19:12:22,833::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-793::DEBUG::2015-09-22
> 19:12:22,833::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-793::DEBUG::2015-09-22
> 19:12:22,833::task::993::Storage.TaskManager.Task::(_decref)
> Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::ref 0 aborting False
>
> Thread-793::DEBUG::2015-09-22
> 19:12:22,837::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Reactor thread::INFO::2015-09-22
> 19:12:29,841::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
> Accepting connection from 127.0.0.1:52511
>
> Reactor thread::DEBUG::2015-09-22
> 19:12:29,848::protocoldetector::82::ProtocolDetector.Detector::(__init__)
> Using required_size=11
>
> Reactor thread::INFO::2015-09-22
> 19:12:29,849::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
> Detected protocol xml from 127.0.0.1:52511
>
> Reactor thread::DEBUG::2015-09-22
> 19:12:29,849::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml
> over http detected from ('127.0.0.1', 52511)
>
> BindingXMLRPC::INFO::2015-09-22
> 19:12:29,849::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
> request handler for 127.0.0.1:52511
>
> Thread-794::INFO::2015-09-22
> 19:12:29,849::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52511 started
>
> Thread-794::INFO::2015-09-22
> 19:12:29,851::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52511 stopped
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,520::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
> Calling 'StoragePool.connectStorageServer' in bridge with
> {u'connectionParams': [{u'id':
> u'00000000-0000-0000-0000-000000000000', u'connection':
> u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}],
> u'storagepoolID': u'00000000-0000-0000-0000-000000000000',
> u'domainType': 7}
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,520::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::moving from state init ->
> state preparing
>
> Thread-795::INFO::2015-09-22
> 19:12:35,521::logUtils::48::dispatcher::(wrapper) Run and protect:
> connectStorageServer(domType=7,
> spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id':
> u'00000000-0000-0000-0000-000000000000', u'connection':
> u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}],
> options=None)
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,539::fileUtils::143::Storage.fileUtils::(createdir) Creating
> directory: /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore mode:
> None
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,540::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/sudo
> -n /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
> -t glusterfs -o backup-volfile-servers=sjcstorage02:sjcvhost02
> sjcstorage01:/vmstore
> /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore (cwd None)
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,706::hsm::2417::Storage.HSM::(__prefetchDomains)
> glusterDomPath: glusterSD/*
>
> Thread-796::DEBUG::2015-09-22
> 19:12:35,707::__init__::298::IOProcessClient::(_run) Starting IOProcess...
>
> Thread-797::DEBUG::2015-09-22
> 19:12:35,712::__init__::298::IOProcessClient::(_run) Starting IOProcess...
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,721::hsm::2429::Storage.HSM::(__prefetchDomains) Found SD
> uuids: ()
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,721::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs:
> {41b75ca9-9783-42a7-9a23-10a2ae3cbb96: storage.glusterSD.findDomain,
> 597d5b5b-7c09-4de9-8840-6993bd9b61a6: storage.glusterSD.findDomain,
> ef17fec4-fecf-4d7e-b815-d1db4ef65225: storage.glusterSD.findDomain}
>
> Thread-795::INFO::2015-09-22
> 19:12:35,721::logUtils::51::dispatcher::(wrapper) Run and protect:
> connectStorageServer, Return response: {'statuslist': [{'status': 0,
> 'id': u'00000000-0000-0000-0000-000000000000'}]}
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,722::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::finished: {'statuslist':
> [{'status': 0, 'id': u'00000000-0000-0000-0000-000000000000'}]}
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,722::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::moving from state
> preparing -> state finished
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,722::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,722::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,722::task::993::Storage.TaskManager.Task::(_decref)
> Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::ref 0 aborting False
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,722::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
> Return 'StoragePool.connectStorageServer' in bridge with [{'status':
> 0, 'id': u'00000000-0000-0000-0000-000000000000'}]
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,722::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,775::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
> Calling 'StoragePool.connectStorageServer' in bridge with
> {u'connectionParams': [{u'id':
> u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection':
> u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}],
> u'storagepoolID': u'00000000-0000-0000-0000-000000000000',
> u'domainType': 7}
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,775::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::moving from state init ->
> state preparing
>
> Thread-798::INFO::2015-09-22
> 19:12:35,776::logUtils::48::dispatcher::(wrapper) Run and protect:
> connectStorageServer(domType=7,
> spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id':
> u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection':
> u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}],
> options=None)
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,777::hsm::2417::Storage.HSM::(__prefetchDomains)
> glusterDomPath: glusterSD/*
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,782::hsm::2429::Storage.HSM::(__prefetchDomains) Found SD
> uuids: ()
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,782::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs:
> {41b75ca9-9783-42a7-9a23-10a2ae3cbb96: storage.glusterSD.findDomain,
> 597d5b5b-7c09-4de9-8840-6993bd9b61a6: storage.glusterSD.findDomain,
> ef17fec4-fecf-4d7e-b815-d1db4ef65225: storage.glusterSD.findDomain}
>
> Thread-798::INFO::2015-09-22
> 19:12:35,782::logUtils::51::dispatcher::(wrapper) Run and protect:
> connectStorageServer, Return response: {'statuslist': [{'status': 0,
> 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,783::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::finished: {'statuslist':
> [{'status': 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,783::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::moving from state
> preparing -> state finished
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,783::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,783::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,783::task::993::Storage.TaskManager.Task::(_decref)
> Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::ref 0 aborting False
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,783::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
> Return 'StoragePool.connectStorageServer' in bridge with [{'status':
> 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,783::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,787::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
> Calling 'StorageDomain.create' in bridge with {u'name':
> u'sjcvmstore01', u'domainType': 7, u'domainClass': 1, u'typeArgs':
> u'sjcstorage01:/vmstore', u'version': u'3', u'storagedomainID':
> u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3'}
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,788::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from state init ->
> state preparing
>
> Thread-801::INFO::2015-09-22
> 19:12:35,788::logUtils::48::dispatcher::(wrapper) Run and protect:
> createStorageDomain(storageType=7,
> sdUUID=u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',
> domainName=u'sjcvmstore01', typeSpecificArg=u'sjcstorage01:/vmstore',
> domClass=1, domVersion=u'3', options=None)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,788::misc::733::Storage.SamplingMethod::(__call__) Trying to
> enter sampling method (storage.sdc.refreshStorage)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,788::misc::736::Storage.SamplingMethod::(__call__) Got in to
> sampling method
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,788::misc::733::Storage.SamplingMethod::(__call__) Trying to
> enter sampling method (storage.iscsi.rescan)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,788::misc::736::Storage.SamplingMethod::(__call__) Got in to
> sampling method
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,788::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI
> scan, this will take up to 30 seconds
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,788::iscsiadm::97::Storage.Misc.excCmd::(_runCmd)
> /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,821::misc::743::Storage.SamplingMethod::(__call__) Returning
> last result
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,821::misc::733::Storage.SamplingMethod::(__call__) Trying to
> enter sampling method (storage.hba.rescan)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,821::misc::736::Storage.SamplingMethod::(__call__) Got in to
> sampling method
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,821::hba::56::Storage.HBA::(rescan) Starting scan
>
> Thread-802::DEBUG::2015-09-22
> 19:12:35,882::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,912::hba::62::Storage.HBA::(rescan) Scan finished
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,912::misc::743::Storage.SamplingMethod::(__call__) Returning
> last result
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,912::multipath::77::Storage.Misc.excCmd::(rescan)
> /usr/bin/sudo -n /usr/sbin/multipath (cwd None)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,936::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS:
> <err> = ''; <rc> = 0
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,936::utils::661::root::(execCmd) /sbin/udevadm settle
> --timeout=5 (cwd None)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,946::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,947::lvm::498::Storage.OperationMutex::(_invalidateAllPvs)
> Operation 'lvm invalidate operation' got the operation mutex
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,947::lvm::500::Storage.OperationMutex::(_invalidateAllPvs)
> Operation 'lvm invalidate operation' released the operation mutex
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,947::lvm::509::Storage.OperationMutex::(_invalidateAllVgs)
> Operation 'lvm invalidate operation' got the operation mutex
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,948::lvm::511::Storage.OperationMutex::(_invalidateAllVgs)
> Operation 'lvm invalidate operation' released the operation mutex
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,948::lvm::529::Storage.OperationMutex::(_invalidateAllLvs)
> Operation 'lvm invalidate operation' got the operation mutex
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,948::lvm::531::Storage.OperationMutex::(_invalidateAllLvs)
> Operation 'lvm invalidate operation' released the operation mutex
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,948::misc::743::Storage.SamplingMethod::(__call__) Returning
> last result
>
> Thread-801::ERROR::2015-09-22
> 19:12:35,949::sdc::138::Storage.StorageDomainCache::(_findDomain)
> looking for unfetched domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3
>
> Thread-801::ERROR::2015-09-22
> 19:12:35,949::sdc::155::Storage.StorageDomainCache::(_findUnfetchedDomain)
> looking for domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,949::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation
> 'lvm reload operation' got the operation mutex
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,950::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
> /usr/sbin/lvm vgs --config ' devices { preferred_names =
> ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0
> disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50
> retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|'
> --ignoreskippedcluster -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
> c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 (cwd None)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,985::lvm::291::Storage.Misc.excCmd::(cmd) FAILED: <err> = '
> WARNING: lvmetad is running but disabled. Restart lvmetad before
> enabling it!\n Volume group "c02fda97-62e3-40d3-9a6e-ac5d100f8ad3"
> not found\n Cannot process volume group
> c02fda97-62e3-40d3-9a6e-ac5d100f8ad3\n'; <rc> = 5
>
> Thread-801::WARNING::2015-09-22
> 19:12:35,986::lvm::376::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 []
> [' WARNING: lvmetad is running but disabled. Restart lvmetad before
> enabling it!', ' Volume group "c02fda97-62e3-40d3-9a6e-ac5d100f8ad3"
> not found', ' Cannot process volume group
> c02fda97-62e3-40d3-9a6e-ac5d100f8ad3']
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,987::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation
> 'lvm reload operation' released the operation mutex
>
> Thread-801::ERROR::2015-09-22
> 19:12:35,997::sdc::144::Storage.StorageDomainCache::(_findDomain)
> domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 not found
>
> Traceback (most recent call last):
>
> File "/usr/share/vdsm/storage/sdc.py", line 142, in _findDomain
>
> dom = findMethod(sdUUID)
>
> File "/usr/share/vdsm/storage/sdc.py", line 172, in _findUnfetchedDomain
>
> raise se.StorageDomainDoesNotExist(sdUUID)
>
> StorageDomainDoesNotExist: Storage domain does not exist:
> (u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',)
>
> Thread-801::INFO::2015-09-22
> 19:12:35,998::nfsSD::69::Storage.StorageDomain::(create)
> sdUUID=c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 domainName=sjcvmstore01
> remotePath=sjcstorage01:/vmstore domClass=1
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,015::__init__::298::IOProcessClient::(_run) Starting IOProcess...
>
> Thread-801::ERROR::2015-09-22
> 19:12:36,067::task::866::Storage.TaskManager.Task::(_setError)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Unexpected error
>
> Traceback (most recent call last):
>
> File "/usr/share/vdsm/storage/task.py", line 873, in _run
>
> return fn(*args, **kargs)
>
> File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
>
> res = f(*args, **kwargs)
>
> File "/usr/share/vdsm/storage/hsm.py", line 2697, in createStorageDomain
>
> domVersion)
>
> File "/usr/share/vdsm/storage/nfsSD.py", line 84, in create
>
> remotePath, storageType, version)
>
> File "/usr/share/vdsm/storage/fileSD.py", line 264, in _prepareMetadata
>
> "create meta file '%s' failed: %s" % (metaFile, str(e)))
>
> StorageDomainMetadataCreationError: Error creating a storage domain's
> metadata: ("create meta file 'outbox' failed: [Errno 5] Input/output
> error",)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,067::task::885::Storage.TaskManager.Task::(_run)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Task._run:
> d2d29352-8677-45cb-a4ab-06aa32cf1acb (7,
> u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3', u'sjcvmstore01',
> u'sjcstorage01:/vmstore', 1, u'3') {} failed - stopping task
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,067::task::1246::Storage.TaskManager.Task::(stop)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::stopping in state
> preparing (force False)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,067::task::993::Storage.TaskManager.Task::(_decref)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::ref 1 aborting True
>
> Thread-801::INFO::2015-09-22
> 19:12:36,067::task::1171::Storage.TaskManager.Task::(prepare)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::aborting: Task is
> aborted: "Error creating a storage domain's metadata" - code 362
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,068::task::1176::Storage.TaskManager.Task::(prepare)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Prepare: aborted: Error
> creating a storage domain's metadata
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,068::task::993::Storage.TaskManager.Task::(_decref)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::ref 0 aborting True
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,068::task::928::Storage.TaskManager.Task::(_doAbort)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Task._doAbort: force False
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,068::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,068::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from state
> preparing -> state aborting
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,068::task::550::Storage.TaskManager.Task::(__state_aborting)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::_aborting: recover policy
> none
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,068::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from state
> aborting -> state failed
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,068::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,068::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-801::ERROR::2015-09-22
> 19:12:36,068::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': 'Error creating a storage domain\'s metadata: ("create
> meta file \'outbox\' failed: [Errno 5] Input/output error",)', 'code':
> 362}}
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,069::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,180::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
> Calling 'StoragePool.disconnectStorageServer' in bridge with
> {u'connectionParams': [{u'id':
> u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection':
> u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}],
> u'storagepoolID': u'00000000-0000-0000-0000-000000000000',
> u'domainType': 7}
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,181::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::moving from state init ->
> state preparing
>
> Thread-807::INFO::2015-09-22
> 19:12:36,182::logUtils::48::dispatcher::(wrapper) Run and protect:
> disconnectStorageServer(domType=7,
> spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id':
> u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection':
> u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}],
> options=None)
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,182::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/sudo
> -n /usr/bin/umount -f -l
> /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore (cwd None)
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,222::misc::733::Storage.SamplingMethod::(__call__) Trying to
> enter sampling method (storage.sdc.refreshStorage)
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,222::misc::736::Storage.SamplingMethod::(__call__) Got in to
> sampling method
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,222::misc::733::Storage.SamplingMethod::(__call__) Trying to
> enter sampling method (storage.iscsi.rescan)
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,222::misc::736::Storage.SamplingMethod::(__call__) Got in to
> sampling method
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,223::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI
> scan, this will take up to 30 seconds
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,223::iscsiadm::97::Storage.Misc.excCmd::(_runCmd)
> /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,258::misc::743::Storage.SamplingMethod::(__call__) Returning
> last result
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,258::misc::733::Storage.SamplingMethod::(__call__) Trying to
> enter sampling method (storage.hba.rescan)
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,258::misc::736::Storage.SamplingMethod::(__call__) Got in to
> sampling method
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,258::hba::56::Storage.HBA::(rescan) Starting scan
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,350::hba::62::Storage.HBA::(rescan) Scan finished
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,350::misc::743::Storage.SamplingMethod::(__call__) Returning
> last result
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,350::multipath::77::Storage.Misc.excCmd::(rescan)
> /usr/bin/sudo -n /usr/sbin/multipath (cwd None)
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,374::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS:
> <err> = ''; <rc> = 0
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,374::utils::661::root::(execCmd) /sbin/udevadm settle
> --timeout=5 (cwd None)
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,383::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,384::lvm::498::Storage.OperationMutex::(_invalidateAllPvs)
> Operation 'lvm invalidate operation' got the operation mutex
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,385::lvm::500::Storage.OperationMutex::(_invalidateAllPvs)
> Operation 'lvm invalidate operation' released the operation mutex
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,385::lvm::509::Storage.OperationMutex::(_invalidateAllVgs)
> Operation 'lvm invalidate operation' got the operation mutex
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,385::lvm::511::Storage.OperationMutex::(_invalidateAllVgs)
> Operation 'lvm invalidate operation' released the operation mutex
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,386::lvm::529::Storage.OperationMutex::(_invalidateAllLvs)
> Operation 'lvm invalidate operation' got the operation mutex
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,386::lvm::531::Storage.OperationMutex::(_invalidateAllLvs)
> Operation 'lvm invalidate operation' released the operation mutex
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,386::misc::743::Storage.SamplingMethod::(__call__) Returning
> last result
>
> Thread-807::INFO::2015-09-22
> 19:12:36,386::logUtils::51::dispatcher::(wrapper) Run and protect:
> disconnectStorageServer, Return response: {'statuslist': [{'status':
> 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,387::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::finished: {'statuslist':
> [{'status': 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,387::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::moving from state
> preparing -> state finished
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,387::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,387::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,387::task::993::Storage.TaskManager.Task::(_decref)
> Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::ref 0 aborting False
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,388::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
> Return 'StoragePool.disconnectStorageServer' in bridge with
> [{'status': 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,388::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Thread-808::DEBUG::2015-09-22
> 19:12:37,868::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving from state init ->
> state preparing
>
> Thread-808::INFO::2015-09-22
> 19:12:37,868::logUtils::48::dispatcher::(wrapper) Run and protect:
> repoStats(options=None)
>
> Thread-808::INFO::2015-09-22
> 19:12:37,868::logUtils::51::dispatcher::(wrapper) Run and protect:
> repoStats, Return response: {}
>
> Thread-808::DEBUG::2015-09-22
> 19:12:37,868::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::finished: {}
>
> Thread-808::DEBUG::2015-09-22
> 19:12:37,868::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving from state
> preparing -> state finished
>
> Thread-808::DEBUG::2015-09-22
> 19:12:37,868::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-808::DEBUG::2015-09-22
> 19:12:37,868::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-808::DEBUG::2015-09-22
> 19:12:37,868::task::993::Storage.TaskManager.Task::(_decref)
> Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::ref 0 aborting False
>
> Thread-808::DEBUG::2015-09-22
> 19:12:37,873::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Reactor thread::INFO::2015-09-22
> 19:12:44,867::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
> Accepting connection from 127.0.0.1:52512
>
> Reactor thread::DEBUG::2015-09-22
> 19:12:44,874::protocoldetector::82::ProtocolDetector.Detector::(__init__)
> Using required_size=11
>
> Reactor thread::INFO::2015-09-22
> 19:12:44,875::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
> Detected protocol xml from 127.0.0.1:52512
>
> Reactor thread::DEBUG::2015-09-22
> 19:12:44,875::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml
> over http detected from ('127.0.0.1', 52512)
>
> BindingXMLRPC::INFO::2015-09-22
> 19:12:44,875::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
> request handler for 127.0.0.1:52512
>
> Thread-809::INFO::2015-09-22
> 19:12:44,876::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52512 started
>
> Thread-809::INFO::2015-09-22
> 19:12:44,877::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52512 stopped
>
> Thread-810::DEBUG::2015-09-22
> 19:12:50,889::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Thread-811::DEBUG::2015-09-22
> 19:12:52,902::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::moving from state init ->
> state preparing
>
> Thread-811::INFO::2015-09-22
> 19:12:52,902::logUtils::48::dispatcher::(wrapper) Run and protect:
> repoStats(options=None)
>
> Thread-811::INFO::2015-09-22
> 19:12:52,902::logUtils::51::dispatcher::(wrapper) Run and protect:
> repoStats, Return response: {}
>
> Thread-811::DEBUG::2015-09-22
> 19:12:52,902::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::finished: {}
>
> Thread-811::DEBUG::2015-09-22
> 19:12:52,903::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::moving from state
> preparing -> state finished
>
> Thread-811::DEBUG::2015-09-22
> 19:12:52,903::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-811::DEBUG::2015-09-22
> 19:12:52,903::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-811::DEBUG::2015-09-22
> 19:12:52,903::task::993::Storage.TaskManager.Task::(_decref)
> Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::ref 0 aborting False
>
> Thread-811::DEBUG::2015-09-22
> 19:12:52,908::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Reactor thread::INFO::2015-09-22
> 19:12:59,895::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
> Accepting connection from 127.0.0.1:52513
>
> Reactor thread::DEBUG::2015-09-22
> 19:12:59,902::protocoldetector::82::ProtocolDetector.Detector::(__init__)
> Using required_size=11
>
> Reactor thread::INFO::2015-09-22
> 19:12:59,902::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
> Detected protocol xml from 127.0.0.1:52513
>
> Reactor thread::DEBUG::2015-09-22
> 19:12:59,902::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml
> over http detected from ('127.0.0.1', 52513)
>
> BindingXMLRPC::INFO::2015-09-22
> 19:12:59,903::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
> request handler for 127.0.0.1:52513
>
> Thread-812::INFO::2015-09-22
> 19:12:59,903::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52513 started
>
> Thread-812::INFO::2015-09-22
> 19:12:59,904::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52513 stopped
>
> Thread-813::DEBUG::2015-09-22
> 19:13:05,898::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Thread-814::DEBUG::2015-09-22
> 19:13:07,934::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::moving from state init ->
> state preparing
>
> Thread-814::INFO::2015-09-22
> 19:13:07,935::logUtils::48::dispatcher::(wrapper) Run and protect:
> repoStats(options=None)
>
> Thread-814::INFO::2015-09-22
> 19:13:07,935::logUtils::51::dispatcher::(wrapper) Run and protect:
> repoStats, Return response: {}
>
> Thread-814::DEBUG::2015-09-22
> 19:13:07,935::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::finished: {}
>
> Thread-814::DEBUG::2015-09-22
> 19:13:07,935::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::moving from state
> preparing -> state finished
>
> Thread-814::DEBUG::2015-09-22
> 19:13:07,935::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-814::DEBUG::2015-09-22
> 19:13:07,935::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-814::DEBUG::2015-09-22
> 19:13:07,935::task::993::Storage.TaskManager.Task::(_decref)
> Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::ref 0 aborting False
>
> Thread-814::DEBUG::2015-09-22
> 19:13:07,939::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Reactor thread::INFO::2015-09-22
> 19:13:14,921::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
> Accepting connection from 127.0.0.1:52515
>
> Reactor thread::DEBUG::2015-09-22
> 19:13:14,927::protocoldetector::82::ProtocolDetector.Detector::(__init__)
> Using required_size=11
>
> Reactor thread::INFO::2015-09-22
> 19:13:14,928::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
> Detected protocol xml from 127.0.0.1:52515
>
> Reactor thread::DEBUG::2015-09-22
> 19:13:14,928::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml
> over http detected from ('127.0.0.1', 52515)
>
> BindingXMLRPC::INFO::2015-09-22
> 19:13:14,928::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
> request handler for 127.0.0.1:52515
>
> Thread-815::INFO::2015-09-22
> 19:13:14,928::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52515 started
>
> Thread-815::INFO::2015-09-22
> 19:13:14,930::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52515 stopped
>
> Thread-816::DEBUG::2015-09-22
> 19:13:20,906::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
>
>
> gluster logs
>
> +------------------------------------------------------------------------------+
>
> 1: volume vmstore-client-0
>
> 2: type protocol/client
>
> 3: option ping-timeout 42
>
> 4: option remote-host sjcstorage01
>
> 5: option remote-subvolume /export/vmstore/brick01
>
> 6: option transport-type socket
>
> 7: option send-gids true
>
> 8: end-volume
>
> 9:
>
> 10: volume vmstore-client-1
>
> 11: type protocol/client
>
> 12: option ping-timeout 42
>
> 13: option remote-host sjcstorage02
>
> 14: option remote-subvolume /export/vmstore/brick01
>
> 15: option transport-type socket
>
> 16: option send-gids true
>
> 17: end-volume
>
> 18:
>
> 19: volume vmstore-client-2
>
> 20: type protocol/client
>
> 21: option ping-timeout 42
>
> 22: option remote-host sjcvhost02
>
> 23: option remote-subvolume /export/vmstore/brick01
>
> 24: option transport-type socket
>
> 25: option send-gids true
>
> 26: end-volume
>
> 27:
>
> 28: volume vmstore-replicate-0
>
> 29: type cluster/replicate
>
> 30: option arbiter-count 1
>
> 31: subvolumes vmstore-client-0 vmstore-client-1 vmstore-client-2
>
> 32: end-volume
>
> 33:
>
> 34: volume vmstore-dht
>
> 35: type cluster/distribute
>
> 36: subvolumes vmstore-replicate-0
>
> 37: end-volume
>
> 38:
>
> 39: volume vmstore-write-behind
>
> 40: type performance/write-behind
>
> 41: subvolumes vmstore-dht
>
> 42: end-volume
>
> 43:
>
> 44: volume vmstore-read-ahead
>
> 45: type performance/read-ahead
>
> 46: subvolumes vmstore-write-behind
>
> 47: end-volume
>
> 48:
>
> 49: volume vmstore-readdir-ahead
>
> 50: type performance/readdir-ahead
>
> 51: subvolumes vmstore-read-ahead
>
> 52: end-volume
>
> 53:
>
> 54: volume vmstore-io-cache
>
> 55: type performance/io-cache
>
> 56: subvolumes vmstore-readdir-ahead
>
> 57: end-volume
>
> 58:
>
> 59: volume vmstore-quick-read
>
> 60: type performance/quick-read
>
> 61: subvolumes vmstore-io-cache
>
> 62: end-volume
>
> 63:
>
> 64: volume vmstore-open-behind
>
> 65: type performance/open-behind
>
> 66: subvolumes vmstore-quick-read
>
> 67: end-volume
>
> 68:
>
> 69: volume vmstore-md-cache
>
> 70: type performance/md-cache
>
> 71: subvolumes vmstore-open-behind
>
> 72: end-volume
>
> 73:
>
> 74: volume vmstore
>
> 75: type debug/io-stats
>
> 76: option latency-measurement off
>
> 77: option count-fop-hits off
>
> 78: subvolumes vmstore-md-cache
>
> 79: end-volume
>
> 80:
>
> 81: volume meta-autoload
>
> 82: type meta
>
> 83: subvolumes vmstore
>
> 84: end-volume
>
> 85:
>
> +------------------------------------------------------------------------------+
>
> [2015-09-22 05:29:07.586205] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
> 0-vmstore-client-0: changing port to 49153 (from 0)
>
> [2015-09-22 05:29:07.586325] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
> 0-vmstore-client-1: changing port to 49153 (from 0)
>
> [2015-09-22 05:29:07.586480] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
> 0-vmstore-client-2: changing port to 49153 (from 0)
>
> [2015-09-22 05:29:07.595052] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-vmstore-client-0: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
>
> [2015-09-22 05:29:07.595397] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-vmstore-client-1: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
>
> [2015-09-22 05:29:07.595576] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-vmstore-client-2: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
>
> [2015-09-22 05:29:07.595721] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-0:
> Connected to vmstore-client-0, attached to remote volume
> '/export/vmstore/brick01'.
>
> [2015-09-22 05:29:07.595738] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-0:
> Server and Client lk-version numbers are not same, reopening the fds
>
> [2015-09-22 05:29:07.596044] I [MSGID: 108005]
> [afr-common.c:3998:afr_notify] 0-vmstore-replicate-0: Subvolume
> 'vmstore-client-0' came back up; going online.
>
> [2015-09-22 05:29:07.596170] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-1:
> Connected to vmstore-client-1, attached to remote volume
> '/export/vmstore/brick01'.
>
> [2015-09-22 05:29:07.596189] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-1:
> Server and Client lk-version numbers are not same, reopening the fds
>
> [2015-09-22 05:29:07.596495] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-2:
> Connected to vmstore-client-2, attached to remote volume
> '/export/vmstore/brick01'.
>
> [2015-09-22 05:29:07.596506] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-2:
> Server and Client lk-version numbers are not same, reopening the fds
>
> [2015-09-22 05:29:07.608758] I [fuse-bridge.c:5053:fuse_graph_setup]
> 0-fuse: switched to graph 0
>
> [2015-09-22 05:29:07.608910] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-0:
> Server lk version = 1
>
> [2015-09-22 05:29:07.608936] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-1:
> Server lk version = 1
>
> [2015-09-22 05:29:07.608950] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-2:
> Server lk version = 1
>
> [2015-09-22 05:29:07.609695] I [MSGID: 101190]
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
> thread with index 2
>
> [2015-09-22 05:29:07.609868] I [fuse-bridge.c:3979:fuse_init]
> 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22
> kernel 7.22
>
> [2015-09-22 05:29:07.616577] I [MSGID: 109063]
> [dht-layout.c:702:dht_layout_normalize] 0-vmstore-dht: Found anomalies
> in / (gfid = 00000000-0000-0000-0000-000000000001). Holes=1 overlaps=0
>
> [2015-09-22 05:29:07.620230] I [MSGID: 109036]
> [dht-common.c:7754:dht_log_new_layout_for_dir_selfheal] 0-vmstore-dht:
> Setting layout of / with [Subvol_name: vmstore-replicate-0, Err: -1 ,
> Start: 0 , Stop: 4294967295 , Hash: 1 ],
>
> [2015-09-22 05:29:08.122415] W [fuse-bridge.c:1230:fuse_err_cbk]
> 0-glusterfs-fuse: 26: REMOVEXATTR() /__DIRECT_IO_TEST__ => -1 (No data
> available)
>
> [2015-09-22 05:29:08.137359] I [MSGID: 109036]
> [dht-common.c:7754:dht_log_new_layout_for_dir_selfheal] 0-vmstore-dht:
> Setting layout of /061b73d5-ae59-462e-b674-ea9c60d436c2 with
> [Subvol_name: vmstore-replicate-0, Err: -1 , Start: 0 , Stop:
> 4294967295 , Hash: 1 ],
>
> [2015-09-22 05:29:08.145835] I [MSGID: 109036]
> [dht-common.c:7754:dht_log_new_layout_for_dir_selfheal] 0-vmstore-dht:
> Setting layout of /061b73d5-ae59-462e-b674-ea9c60d436c2/dom_md with
> [Subvol_name: vmstore-replicate-0, Err: -1 , Start: 0 , Stop:
> 4294967295 , Hash: 1 ],
>
> [2015-09-22 05:30:57.897819] I [MSGID: 100030]
> [glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs: Started running
> /usr/sbin/glusterfs version 3.7.4 (args: /usr/sbin/glusterfs
> --volfile-server=sjcvhost02 --volfile-server=sjcstorage01
> --volfile-server=sjcstorage02 --volfile-id=/vmstore
> /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
>
> [2015-09-22 05:30:57.909889] I [MSGID: 101190]
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
> thread with index 1
>
> [2015-09-22 05:30:57.923087] I [MSGID: 114020] [client.c:2118:notify]
> 0-vmstore-client-0: parent translators are ready, attempting connect
> on transport
>
> [2015-09-22 05:30:57.925701] I [MSGID: 114020] [client.c:2118:notify]
> 0-vmstore-client-1: parent translators are ready, attempting connect
> on transport
>
> [2015-09-22 05:30:57.927984] I [MSGID: 114020] [client.c:2118:notify]
> 0-vmstore-client-2: parent translators are ready, attempting connect
> on transport
>
> Final graph:
>
> +------------------------------------------------------------------------------+
>
> 1: volume vmstore-client-0
>
> 2: type protocol/client
>
> 3: option ping-timeout 42
>
> 4: option remote-host sjcstorage01
>
> 5: option remote-subvolume /export/vmstore/brick01
>
> 6: option transport-type socket
>
> 7: option send-gids true
>
> 8: end-volume
>
> 9:
>
> 10: volume vmstore-client-1
>
> 11: type protocol/client
>
> 12: option ping-timeout 42
>
> 13: option remote-host sjcstorage02
>
> 14: option remote-subvolume /export/vmstore/brick01
>
> 15: option transport-type socket
>
> 16: option send-gids true
>
> 17: end-volume
>
> 18:
>
> 19: volume vmstore-client-2
>
> 20: type protocol/client
>
> 21: option ping-timeout 42
>
> 22: option remote-host sjcvhost02
>
> 23: option remote-subvolume /export/vmstore/brick01
>
> 24: option transport-type socket
>
> 25: option send-gids true
>
> 26: end-volume
>
> 27:
>
> 28: volume vmstore-replicate-0
>
> 29: type cluster/replicate
>
> 30: option arbiter-count 1
>
> 31: subvolumes vmstore-client-0 vmstore-client-1 vmstore-client-2
>
> 32: end-volume
>
> 33:
>
> 34: volume vmstore-dht
>
> 35: type cluster/distribute
>
> 36: subvolumes vmstore-replicate-0
>
> 37: end-volume
>
> 38:
>
> 39: volume vmstore-write-behind
>
> 40: type performance/write-behind
>
> 41: subvolumes vmstore-dht
>
> 42: end-volume
>
> 43:
>
> 44: volume vmstore-read-ahead
>
> 45: type performance/read-ahead
>
> 46: subvolumes vmstore-write-behind
>
> 47: end-volume
>
> 48:
>
> 49: volume vmstore-readdir-ahead
>
> 50: type performance/readdir-ahead
>
> 51: subvolumes vmstore-read-ahead
>
> 52: end-volume
>
> 53:
>
> 54: volume vmstore-io-cache
>
> 55: type performance/io-cache
>
> 56: subvolumes vmstore-readdir-ahead
>
> 57: end-volume
>
> 58:
>
> 59: volume vmstore-quick-read
>
> 60: type performance/quick-read
>
> 61: subvolumes vmstore-io-cache
>
> 62: end-volume
>
> 63:
>
> 64: volume vmstore-open-behind
>
> 65: type performance/open-behind
>
> 66: subvolumes vmstore-quick-read
>
> 67: end-volume
>
> 68:
>
> 69: volume vmstore-md-cache
>
> 70: type performance/md-cache
>
> 71: subvolumes vmstore-open-behind
>
> 72: end-volume
>
> 73:
>
> 74: volume vmstore
>
> 75: type debug/io-stats
>
> 76: option latency-measurement off
>
> 77: option count-fop-hits off
>
> 78: subvolumes vmstore-md-cache
>
> 79: end-volume
>
> 80:
>
> 81: volume meta-autoload
>
> 82: type meta
>
> 83: subvolumes vmstore
>
> 84: end-volume
>
> 85:
>
> +------------------------------------------------------------------------------+
>
> [2015-09-22 05:30:57.934021] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
> 0-vmstore-client-0: changing port to 49153 (from 0)
>
> [2015-09-22 05:30:57.934145] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
> 0-vmstore-client-1: changing port to 49153 (from 0)
>
> [2015-09-22 05:30:57.934491] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
> 0-vmstore-client-2: changing port to 49153 (from 0)
>
> [2015-09-22 05:30:57.942198] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-vmstore-client-0: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
>
> [2015-09-22 05:30:57.942545] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-vmstore-client-1: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
>
> [2015-09-22 05:30:57.942659] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-vmstore-client-2: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
>
> [2015-09-22 05:30:57.942797] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-0:
> Connected to vmstore-client-0, attached to remote volume
> '/export/vmstore/brick01'.
>
> [2015-09-22 05:30:57.942808] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-0:
> Server and Client lk-version numbers are not same, reopening the fds
>
> [2015-09-22 05:30:57.943036] I [MSGID: 108005]
> [afr-common.c:3998:afr_notify] 0-vmstore-replicate-0: Subvolume
> 'vmstore-client-0' came back up; going online.
>
> [2015-09-22 05:30:57.943078] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-1:
> Connected to vmstore-client-1, attached to remote volume
> '/export/vmstore/brick01'.
>
> [2015-09-22 05:30:57.943086] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-1:
> Server and Client lk-version numbers are not same, reopening the fds
>
> [2015-09-22 05:30:57.943292] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-2:
> Connected to vmstore-client-2, attached to remote volume
> '/export/vmstore/brick01'.
>
> [2015-09-22 05:30:57.943302] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-2:
> Server and Client lk-version numbers are not same, reopening the fds
>
> [2015-09-22 05:30:57.953887] I [fuse-bridge.c:5053:fuse_graph_setup]
> 0-fuse: switched to graph 0
>
> [2015-09-22 05:30:57.954071] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-0:
> Server lk version = 1
>
> [2015-09-22 05:30:57.954105] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-1:
> Server lk version = 1
>
> [2015-09-22 05:30:57.954124] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-2:
> Server lk version = 1
>
> [2015-09-22 05:30:57.955282] I [fuse-bridge.c:3979:fuse_init]
> 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22
> kernel 7.22
>
> [2015-09-22 05:30:57.955738] I [MSGID: 101190]
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
> thread with index 2
>
> [2015-09-22 05:30:57.970232] I [fuse-bridge.c:4900:fuse_thread_proc]
> 0-fuse: unmounting /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
>
> [2015-09-22 05:30:57.970834] W [glusterfsd.c:1219:cleanup_and_exit]
> (-->/lib64/libpthread.so.0(+0x7df5) [0x7f187139fdf5]
> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f1872a09785]
> -->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7f1872a09609] ) 0-:
> received signum (15), shutting down
>
> [2015-09-22 05:30:57.970848] I [fuse-bridge.c:5595:fini] 0-fuse:
> Unmounting '/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.
>
> [2015-09-22 05:30:58.420973] I [fuse-bridge.c:4900:fuse_thread_proc]
> 0-fuse: unmounting /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
>
> [2015-09-22 05:30:58.421355] W [glusterfsd.c:1219:cleanup_and_exit]
> (-->/lib64/libpthread.so.0(+0x7df5) [0x7f8267cd4df5]
> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f826933e785]
> -->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7f826933e609] ) 0-:
> received signum (15), shutting down
>
> [2015-09-22 05:30:58.421369] I [fuse-bridge.c:5595:fini] 0-fuse:
> Unmounting '/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.
>
> [2015-09-22 05:31:09.534410] I [MSGID: 100030]
> [glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs: Started running
> /usr/sbin/glusterfs version 3.7.4 (args: /usr/sbin/glusterfs
> --volfile-server=sjcvhost02 --volfile-server=sjcstorage01
> --volfile-server=sjcstorage02 --volfile-id=/vmstore
> /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
>
> [2015-09-22 05:31:09.545686] I [MSGID: 101190]
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
> thread with index 1
>
> [2015-09-22 05:31:09.553019] I [MSGID: 114020] [client.c:2118:notify]
> 0-vmstore-client-0: parent translators are ready, attempting connect
> on transport
>
> [2015-09-22 05:31:09.555552] I [MSGID: 114020] [client.c:2118:notify]
> 0-vmstore-client-1: parent translators are ready, attempting connect
> on transport
>
> [2015-09-22 05:31:09.557989] I [MSGID: 114020] [client.c:2118:notify]
> 0-vmstore-client-2: parent translators are ready, attempting connect
> on transport
>
> Final graph:
>
> +------------------------------------------------------------------------------+
>
> 1: volume vmstore-client-0
>
> 2: type protocol/client
>
> 3: option ping-timeout 42
>
> 4: option remote-host sjcstorage01
>
> 5: option remote-subvolume /export/vmstore/brick01
>
> 6: option transport-type socket
>
> 7: option send-gids true
>
> 8: end-volume
>
> 9:
>
> 10: volume vmstore-client-1
>
> 11: type protocol/client
>
> 12: option ping-timeout 42
>
> 13: option remote-host sjcstorage02
>
> 14: option remote-subvolume /export/vmstore/brick01
>
> 15: option transport-type socket
>
> 16: option send-gids true
>
> 17: end-volume
>
> 18:
>
> 19: volume vmstore-client-2
>
> 20: type protocol/client
>
> 21: option ping-timeout 42
>
> 22: option remote-host sjcvhost02
>
> 23: option remote-subvolume /export/vmstore/brick01
>
> 24: option transport-type socket
>
> 25: option send-gids true
>
> 26: end-volume
>
> 27:
>
> 28: volume vmstore-replicate-0
>
> 29: type cluster/replicate
>
> 30: option arbiter-count 1
>
> 31: subvolumes vmstore-client-0 vmstore-client-1 vmstore-client-2
>
> 32: end-volume
>
> 33:
>
> 34: volume vmstore-dht
>
> 35: type cluster/distribute
>
> 36: subvolumes vmstore-replicate-0
>
> 37: end-volume
>
> 38:
>
> 39: volume vmstore-write-behind
>
> 40: type performance/write-behind
>
> 41: subvolumes vmstore-dht
>
> 42: end-volume
>
> 43:
>
> 44: volume vmstore-read-ahead
>
> 45: type performance/read-ahead
>
> 46: subvolumes vmstore-write-behind
>
> 47: end-volume
>
> 48:
>
> 49: volume vmstore-readdir-ahead
>
> 50: type performance/readdir-ahead
>
> 51: subvolumes vmstore-read-ahead
>
> 52: end-volume
>
> 53:
>
> 54: volume vmstore-io-cache
>
> 55: type performance/io-cache
>
> 56: subvolumes vmstore-readdir-ahead
>
> 57: end-volume
>
> 58:
>
> 59: volume vmstore-quick-read
>
> 60: type performance/quick-read
>
> 61: subvolumes vmstore-io-cache
>
> 62: end-volume
>
> 63:
>
> 64: volume vmstore-open-behind
>
> 65: type performance/open-behind
>
> 66: subvolumes vmstore-quick-read
>
> 67: end-volume
>
> 68:
>
> 69: volume vmstore-md-cache
>
> 70: type performance/md-cache
>
> 71: subvolumes vmstore-open-behind
>
> 72: end-volume
>
> 73:
>
> 74: volume vmstore
>
> 75: type debug/io-stats
>
> 76: option latency-measurement off
>
> 77: option count-fop-hits off
>
> 78: subvolumes vmstore-md-cache
>
> 79: end-volume
>
> 80:
>
> 81: volume meta-autoload
>
> 82: type meta
>
> 83: subvolumes vmstore
>
> 84: end-volume
>
> 85:
>
> +------------------------------------------------------------------------------+
>
> [2015-09-22 05:31:09.563262] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
> 0-vmstore-client-0: changing port to 49153 (from 0)
>
> [2015-09-22 05:31:09.563431] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
> 0-vmstore-client-1: changing port to 49153 (from 0)
>
> [2015-09-22 05:31:09.563877] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
> 0-vmstore-client-2: changing port to 49153 (from 0)
>
> [2015-09-22 05:31:09.572443] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-vmstore-client-1: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
>
> [2015-09-22 05:31:09.572599] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-vmstore-client-0: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
>
> [2015-09-22 05:31:09.572742] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-vmstore-client-2: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
>
> [2015-09-22 05:31:09.573165] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-1:
> Connected to vmstore-client-1, attached to remote volume
> '/export/vmstore/brick01'.
>
> [2015-09-22 05:31:09.573186] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-1:
> Server and Client lk-version numbers are not same, reopening the fds
>
> [2015-09-22 05:31:09.573395] I [MSGID: 108005]
> [afr-common.c:3998:afr_notify] 0-vmstore-replicate-0: Subvolume
> 'vmstore-client-1' came back up; going online.
>
> [2015-09-22 05:31:09.573427] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-0:
> Connected to vmstore-client-0, attached to remote volume
> '/export/vmstore/brick01'.
>
> [2015-09-22 05:31:09.573435] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-0:
> Server and Client lk-version numbers are not same, reopening the fds
>
> [2015-09-22 05:31:09.573754] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-2:
> Connected to vmstore-client-2, attached to remote volume
> '/export/vmstore/brick01'.
>
> [2015-09-22 05:31:09.573783] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-2:
> Server and Client lk-version numbers are not same, reopening the fds
>
> [2015-09-22 05:31:09.577192] I [fuse-bridge.c:5053:fuse_graph_setup]
> 0-fuse: switched to graph 0
>
> [2015-09-22 05:31:09.577302] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-1:
> Server lk version = 1
>
> [2015-09-22 05:31:09.577325] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-0:
> Server lk version = 1
>
> [2015-09-22 05:31:09.577339] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-2:
> Server lk version = 1
>
> [2015-09-22 05:31:09.578125] I [fuse-bridge.c:3979:fuse_init]
> 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22
> kernel 7.22
>
> [2015-09-22 05:31:09.578636] I [MSGID: 101190]
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
> thread with index 2
>
> [2015-09-22 05:31:10.073698] I [fuse-bridge.c:4900:fuse_thread_proc]
> 0-fuse: unmounting /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
>
> [2015-09-22 05:31:10.073977] W [glusterfsd.c:1219:cleanup_and_exit]
> (-->/lib64/libpthread.so.0(+0x7df5) [0x7f6b9ba88df5]
> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f6b9d0f2785]
> -->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7f6b9d0f2609] ) 0-:
> received signum (15), shutting down
>
> [2015-09-22 05:31:10.073993] I [fuse-bridge.c:5595:fini] 0-fuse:
> Unmounting '/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.
>
> [2015-09-22 05:31:20.184700] I [MSGID: 100030]
> [glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs: Started running
> /usr/sbin/glusterfs version 3.7.4 (args: /usr/sbin/glusterfs
> --volfile-server=sjcvhost02 --volfile-server=sjcstorage01
> --volfile-server=sjcstorage02 --volfile-id=/vmstore
> /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
>
> [2015-09-22 05:31:20.194928] I [MSGID: 101190]
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
> thread with index 1
>
> [2015-09-22 05:31:20.200701] I [MSGID: 114020] [client.c:2118:notify]
> 0-vmstore-client-0: parent translators are ready, attempting connect
> on transport
>
> [2015-09-22 05:31:20.203110] I [MSGID: 114020] [client.c:2118:notify]
> 0-vmstore-client-1: parent translators are ready, attempting connect
> on transport
>
> [2015-09-22 05:31:20.205708] I [MSGID: 114020] [client.c:2118:notify]
> 0-vmstore-client-2: parent translators are ready, attempting connect
> on transport
>
> Final graph:
>
>
>
> Hope this helps.
>
>
> thanks again
>
>
> Brett Stevens
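
For reference, a quick way to narrow down the "create meta file 'outbox'
failed: [Errno 5] Input/output error" shown in the vdsm log above is a manual
write test on the gluster mount from the hypervisor. A minimal sketch follows;
the mount point /mnt/vmstore-test is an assumption, the volume path is taken
from the logs, and vdsm (uid 36) is the standard oVirt service user:

    mkdir -p /mnt/vmstore-test
    mount -t glusterfs sjcstorage01:/vmstore /mnt/vmstore-test
    sudo -u vdsm touch /mnt/vmstore-test/write_test
    sudo -u vdsm dd if=/dev/zero of=/mnt/vmstore-test/dd_test bs=4096 count=1 oflag=direct
    umount /mnt/vmstore-test

If either write fails with EIO, the problem is most likely on the gluster side
(brick permissions, quorum, or the arbiter setup) rather than in oVirt itself.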
>
>
>
> On Tue, Sep 22, 2015 at 10:14 PM, Sahina Bose <sabose(a)redhat.com> wrote:
>
>
>
> On 09/22/2015 02:17 PM, Brett Stevens wrote:
>> Hi. First time on the lists. I've searched for this but had no luck,
>> so sorry if this has been covered before.
>>
>> I'm working with the latest 3.6 beta with the following
>> infrastructure.
>>
>> 1 management host (to be used for a number of tasks, so we chose not
>> to use self-hosted; we are a school and need to keep an eye on
>> hardware costs)
>> 2 compute nodes
>> 2 gluster nodes
>>
>> So far I have built one gluster volume using the gluster CLI, giving
>> me 2 data nodes and one arbiter node (the management host).
>>
>> So far, every time I create a volume it shows up straight away in
>> the oVirt GUI; however, no matter what I try, I cannot create or
>> import it as a data domain.
>>
>> The current error in the oVirt GUI is "Error while executing
>> action AddGlusterFsStorageDomain: Error creating a storage
>> domain's metadata".
>
> Please provide vdsm and gluster logs
>
>>
>> The logs are continuously rolling the following errors:
>>
>> Scheduler_Worker-53) [] START,
>> GlusterVolumesListVDSCommand(HostName = sjcstorage02,
>> GlusterVolumesListVDSParameters:{runAsync='true',
>> hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id: 24198fbf
>>
>> 2015-09-22 03:57:29,903 WARN
>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>> (DefaultQuartzScheduler_Worker-53) [] Could not associate brick
>> 'sjcstorage01:/export/vmstore/brick01' of volume
>> '878a316d-2394-4aae-bdf8-e10eea38225e' with correct network as no
>> gluster network found in cluster
>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>
>
> What is the hostname provided in oVirt engine for sjcstorage01?
> Does this host have multiple NICs?
>
> Could you provide output of gluster volume info?
> Please note that these errors are not related to the error in
> creating the storage domain. However, they could prevent you from
> monitoring the state of the gluster volume from oVirt.
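
As an aside, the volume details requested above are usually gathered with the
gluster CLI on one of the storage nodes, for example (the volume name vmstore
is taken from the logs in this thread):

    gluster volume info vmstore
    gluster volume status vmstore
    gluster peer status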
>
>> 2015-09-22 03:57:29,905 WARN
>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>> (DefaultQuartzScheduler_Worker-53) [] Could not associate brick
>> 'sjcstorage02:/export/vmstore/brick01' of volume
>> '878a316d-2394-4aae-bdf8-e10eea38225e' with correct network as no
>> gluster network found in cluster
>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>
>> 2015-09-22 03:57:29,905 WARN
>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>> (DefaultQuartzScheduler_Worker-53) [] Could not add brick
>> 'sjcvhost02:/export/vmstore/brick01' to volume
>> '878a316d-2394-4aae-bdf8-e10eea38225e' - server uuid
>> '29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in cluster
>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>
>> 2015-09-22 03:57:29,905 INFO
>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
>> (DefaultQuartzScheduler_Worker-53) [] FINISH,
>> GlusterVolumesListVDSCommand, return:
>> {878a316d-2394-4aae-bdf8-e10eea38225e=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@41e93fb1},
>> log id: 24198fbf
>>
>>
>> I'm new to oVirt and Gluster, so any help would be great.
>>
>>
>> thanks
>>
>>
>> Brett Stevens
>>
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
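
For reference, a three-brick replica volume with one arbiter like the one
described above is typically created and tuned for oVirt along these lines.
This is only a sketch based on the brick paths shown in the volfiles earlier
in the thread; the "group virt" profile and the ownership settings are general
oVirt/gluster recommendations, not something confirmed from this setup:

    gluster volume create vmstore replica 3 arbiter 1 \
        sjcstorage01:/export/vmstore/brick01 \
        sjcstorage02:/export/vmstore/brick01 \
        sjcvhost02:/export/vmstore/brick01
    gluster volume set vmstore group virt
    gluster volume set vmstore storage.owner-uid 36
    gluster volume set vmstore storage.owner-gid 36
    gluster volume start vmstore

Missing brick ownership is a common cause of storage domain creation failures,
although the EIO reported here may point more towards a replication or quorum
problem than a plain permission error.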
--------------090502070204020205010802
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: 8bit
<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
+ ovirt-users<br>
<br>
Some clarity on your setup - <br>
<span class="">sjcvhost03 - is this your arbiter node and ovirt
management node? And are you running a compute + storage on the
same nodes - i.e, </span><span class="">sjcstorage01, </span><span
class="">sjcstorage02, </span><span class="">sjcvhost03
(arbiter).<br>
<br>
</span><br>
<span class=""> CreateStorageDomainVDSCommand(HostName = sjcvhost03,
CreateStorageDomainVDSCommandParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storageDomain='StorageDomainStatic:{name='sjcvmstore',
id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
args='sjcstorage01:/vmstore'}), log id: b9fe587<br>
<br>
- fails with </span><span class="">Error creating a storage
domain's metadata: ("create meta file 'outbox' failed: [Errno 5]
Input/output error",<br>
<br>
Are the vdsm logs you provided from </span><span class="">sjcvhost03?
There are no errors to be seen in the gluster log you provided.
Could you provide mount log from </span><span class=""><span
class="">sjcvhost03</span> (at
/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore.log most
likely)<br>
If possible, /var/log/glusterfs/* from the 3 storage nodes.<br>
<br>
thanks<br>
sahina<br>
<br>
</span>
<div class="moz-cite-prefix">On 09/23/2015 05:02 AM, Brett Stevens
wrote:<br>
</div>
<blockquote
cite="mid:CAK02sjsh7JXf56xuMSEW_knZcNem9FNsdjEhd3NAQOQiLjeTrA@mail.gmail.com"
type="cite">
<div dir="ltr">Hi Sahina,
<div><br>
</div>
<div>as requested here is some logs taken during a domain
create.</div>
<div><br>
</div>
<div>
<p class=""><span class="">2015-09-22 18:46:44,320 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-88) [] START,
GlusterVolumesListVDSCommand(HostName = sjcstorage01,
GlusterVolumesListVDSParameters:{runAsync='true',
hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id:
2205ff1</span></p>
<p class=""><span class="">2015-09-22 18:46:44,413 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-88) [] Could not associate
brick 'sjcstorage01:/export/vmstore/brick01' of volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct
network as no gluster network found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p class=""><span class="">2015-09-22 18:46:44,417 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-88) [] Could not associate
brick 'sjcstorage02:/export/vmstore/brick01' of volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct
network as no gluster network found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p class=""><span class="">2015-09-22 18:46:44,417 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-88) [] Could not add brick
'sjcvhost02:/export/vmstore/brick01' to volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' - server uuid
'29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in
cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p class=""><span class="">2015-09-22 18:46:44,418 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-88) [] FINISH,
GlusterVolumesListVDSCommand, return:
{030f270a-0999-4df4-9b14-ae56eb0a2fb9=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a0628f36},
log id: 2205ff1</span></p>
<p class=""><span class="">2015-09-22 18:46:45,215 INFO
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
(default task-24) [5099cda3] Lock Acquired to object
'EngineLock:{exclusiveLocks='[sjcstorage01:/vmstore=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
sharedLocks='null'}'</span></p>
<p class=""><span class="">2015-09-22 18:46:45,230 INFO
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
(default task-24) [5099cda3] Running command:
AddStorageServerConnectionCommand internal: false.
Entities affected : ID:
aaa00000-0000-0000-0000-123456789aaa Type: SystemAction
group CREATE_STORAGE_DOMAIN with role type ADMIN</span></p>
<p class=""><span class="">2015-09-22 18:46:45,233 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-24) [5099cda3] START,
ConnectStorageServerVDSCommand(HostName = sjcvhost03,
StorageServerConnectionManagementVDSParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storagePoolId='00000000-0000-0000-0000-000000000000',
storageType='GLUSTERFS',
connectionList='[StorageServerConnections:{id='null',
connection='sjcstorage01:/vmstore', iqn='null',
vfsType='glusterfs', mountOptions='null',
nfsVersion='null', nfsRetrans='null', nfsTimeo='null',
iface='null', netIfaceName='null'}]'}), log id: 6a112292</span></p>
<p class=""><span class="">2015-09-22 18:46:48,065 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-24) [5099cda3] FINISH,
ConnectStorageServerVDSCommand, return:
{00000000-0000-0000-0000-000000000000=0}, log id: 6a112292</span></p>
<p class=""><span class="">2015-09-22 18:46:48,073 INFO
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
(default task-24) [5099cda3] Lock freed to object
'EngineLock:{exclusiveLocks='[sjcstorage01:/vmstore=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
sharedLocks='null'}'</span></p>
<p class=""><span class="">2015-09-22 18:46:48,188 INFO
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23) [6410419] Running command:
AddGlusterFsStorageDomainCommand internal: false. Entities
affected : ID: aaa00000-0000-0000-0000-123456789aaa Type:
SystemAction group CREATE_STORAGE_DOMAIN with role type
ADMIN</span></p>
<p class=""><span class="">2015-09-22 18:46:48,206 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-23) [6410419] START,
ConnectStorageServerVDSCommand(HostName = sjcvhost03,
StorageServerConnectionManagementVDSParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storagePoolId='00000000-0000-0000-0000-000000000000',
storageType='GLUSTERFS',
connectionList='[StorageServerConnections:{id='ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e',
connection='sjcstorage01:/vmstore', iqn='null',
vfsType='glusterfs', mountOptions='null',
nfsVersion='null', nfsRetrans='null', nfsTimeo='null',
iface='null', netIfaceName='null'}]'}), log id: 38a2b0d</span></p>
<p class=""><span class="">2015-09-22 18:46:48,219 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-23) [6410419] FINISH,
ConnectStorageServerVDSCommand, return:
{ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=0}, log id: 38a2b0d</span></p>
<p class=""><span class="">2015-09-22 18:46:48,221 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23) [6410419] START,
CreateStorageDomainVDSCommand(HostName = sjcvhost03,
CreateStorageDomainVDSCommandParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storageDomain='StorageDomainStatic:{name='sjcvmstore',
id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
args='sjcstorage01:/vmstore'}), log id: b9fe587</span></p>
<p class=""><span class="">2015-09-22 18:46:48,744 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-23) [6410419] Correlation ID: null, Call
Stack: null, Custom Event ID: -1, Message: VDSM sjcvhost03
command failed: Error creating a storage domain's
metadata: ("create meta file 'outbox' failed: [Errno 5]
Input/output error",)</span></p>
<p class=""><span class="">2015-09-22 18:46:48,744 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23) [6410419] Command
'org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand'
return value 'StatusOnlyReturnForXmlRpc
[status=StatusForXmlRpc [code=362, message=Error creating
a storage domain's metadata: ("create meta file 'outbox'
failed: [Errno 5] Input/output error",)]]'</span></p>
<p class=""><span class="">2015-09-22 18:46:48,744 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23) [6410419] HostName = sjcvhost03</span></p>
<p class=""><span class="">2015-09-22 18:46:48,745 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23) [6410419] Command
'CreateStorageDomainVDSCommand(HostName = sjcvhost03,
CreateStorageDomainVDSCommandParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storageDomain='StorageDomainStatic:{name='sjcvmstore',
id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
args='sjcstorage01:/vmstore'})' execution failed:
VDSGenericException: VDSErrorException: Failed in
vdscommand to CreateStorageDomainVDS, error = Error
creating a storage domain's metadata: ("create meta file
'outbox' failed: [Errno 5] Input/output error",)</span></p>
<p class=""><span class="">2015-09-22 18:46:48,745 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23) [6410419] FINISH,
CreateStorageDomainVDSCommand, log id: b9fe587</span></p>
<p class=""><span class="">2015-09-22 18:46:48,745 ERROR
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23) [6410419] Command
'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand'
failed: EngineException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed in
vdscommand to CreateStorageDomainVDS, error = Error
creating a storage domain's metadata: ("create meta file
'outbox' failed: [Errno 5] Input/output error",) (Failed
with error StorageDomainMetadataCreationError and code
362)</span></p>
<p class=""><span class="">2015-09-22 18:46:48,755 INFO
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23) [6410419] Command
[id=5ae15f53-69a1-47c5-b3a5-82f32c20e48f]: Compensating
NEW_ENTITY_ID of
org.ovirt.engine.core.common.businessentities.StorageDomainDynamic;
snapshot: 597d5b5b-7c09-4de9-8840-6993bd9b61a6.</span></p>
<p class=""><span class="">2015-09-22 18:46:48,758 INFO
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23) [6410419] Command
[id=5ae15f53-69a1-47c5-b3a5-82f32c20e48f]: Compensating
NEW_ENTITY_ID of
org.ovirt.engine.core.common.businessentities.StorageDomainStatic;
snapshot: 597d5b5b-7c09-4de9-8840-6993bd9b61a6.</span></p>
<p class=""><span class="">2015-09-22 18:46:48,769 ERROR
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23) [6410419] Transaction rolled-back for
command
'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand'.</span></p>
<p class=""><span class="">2015-09-22 18:46:48,784 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-23) [6410419] Correlation ID: 6410419, Job
ID: 78692780-a06f-49a5-b6b1-e6c24a820d62, Call Stack:
null, Custom Event ID: -1, Message: Failed to add Storage
Domain sjcvmstore. (User: admin@internal)</span></p>
<p class=""><span class="">2015-09-22 18:46:48,996 INFO
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
(default task-32) [1635a244] Lock Acquired to object
'EngineLock:{exclusiveLocks='[ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>,
sjcstorage01:/vmstore=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
sharedLocks='null'}'</span></p>
<p class=""><span class="">2015-09-22 18:46:49,018 INFO
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
(default task-32) [1635a244] Running command:
RemoveStorageServerConnectionCommand internal: false.
Entities affected : ID:
aaa00000-0000-0000-0000-123456789aaa Type: SystemAction
group CREATE_STORAGE_DOMAIN with role type ADMIN</span></p>
<p class=""><span class="">2015-09-22 18:46:49,024 INFO
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
(default task-32) [1635a244] Removing connection
'ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e' from database </span></p>
<p class=""><span class="">2015-09-22 18:46:49,026 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
(default task-32) [1635a244] START,
DisconnectStorageServerVDSCommand(HostName = sjcvhost03,
StorageServerConnectionManagementVDSParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storagePoolId='00000000-0000-0000-0000-000000000000',
storageType='GLUSTERFS',
connectionList='[StorageServerConnections:{id='ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e',
connection='sjcstorage01:/vmstore', iqn='null',
vfsType='glusterfs', mountOptions='null',
nfsVersion='null', nfsRetrans='null', nfsTimeo='null',
iface='null', netIfaceName='null'}]'}), log id: 39d3b568</span></p>
<p class=""><span class="">2015-09-22 18:46:49,248 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
(default task-32) [1635a244] FINISH,
DisconnectStorageServerVDSCommand, return:
{ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=0}, log id: 39d3b568</span></p>
<p class=""><span class="">2015-09-22 18:46:49,252 INFO
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
(default task-32) [1635a244] Lock freed to object
'EngineLock:{exclusiveLocks='[ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>,
sjcstorage01:/vmstore=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
sharedLocks='null'}'</span></p>
<p class=""><span class="">2015-09-22 18:46:49,431 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-3) [] START,
GlusterVolumesListVDSCommand(HostName = sjcstorage01,
GlusterVolumesListVDSParameters:{runAsync='true',
hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id:
17014ae8</span></p>
<p class=""><span class="">2015-09-22 18:46:49,511 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-3) [] Could not associate
brick 'sjcstorage01:/export/vmstore/brick01' of volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct
network as no gluster network found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p class=""><span class="">2015-09-22 18:46:49,515 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-3) [] Could not associate
brick 'sjcstorage02:/export/vmstore/brick01' of volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct
network as no gluster network found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p class=""><span class="">2015-09-22 18:46:49,516 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-3) [] Could not add brick
'sjcvhost02:/export/vmstore/brick01' to volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' - server uuid
'29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in
cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p class=""><span class="">2015-09-22 18:46:49,516 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-3) [] FINISH,
GlusterVolumesListVDSCommand, return:
{030f270a-0999-4df4-9b14-ae56eb0a2fb9=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@92ed0f75},
log id: 17014ae8</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class="">ovirt engine thinks that
sjcstorage01 is sjcstorage01, its all testbed at the
moment and is all short names, defined in /etc/hosts (all
copied to each server for consistancy)</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class="">volume info for vmstore is</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class="">Status of volume: vmstore</span></p>
<p class=""><span class="">Gluster process
TCP Port RDMA Port Online Pid</span></p>
<p class=""><span class="">------------------------------------------------------------------------------</span></p>
<p class=""><span class="">Brick
sjcstorage01:/export/vmstore/brick01 49157 0
Y 7444 </span></p>
<p class=""><span class="">Brick
sjcstorage02:/export/vmstore/brick01 49157 0
Y 4063 </span></p>
<p class=""><span class="">Brick
sjcvhost02:/export/vmstore/brick01 49156 0
Y 3243 </span></p>
<p class=""><span class="">NFS Server on localhost
2049 0 Y 3268 </span></p>
<p class=""><span class="">Self-heal Daemon on localhost
N/A N/A Y 3284 </span></p>
<p class=""><span class="">NFS Server on sjcstorage01
2049 0 Y 7463 </span></p>
<p class=""><span class="">Self-heal Daemon on sjcstorage01
N/A N/A Y 7472 </span></p>
<p class=""><span class="">NFS Server on sjcstorage02
2049 0 Y 4082 </span></p>
<p class=""><span class="">Self-heal Daemon on sjcstorage02
N/A N/A Y 4090 </span></p>
<p class=""><span class=""> </span></p>
<p class=""><span class="">Task Status of Volume vmstore</span></p>
<p class=""><span class="">------------------------------------------------------------------------------</span></p>
<p class="">
</p>
<p class=""><span class="">There are no active volume tasks</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class="">vdsm logs from time the domain is
added</span></p>
<p class=""><span class=""><br>
</span></p>
<p class="">hread-789::DEBUG::2015-09-22
19:12:05,865::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-790::DEBUG::2015-09-22
19:12:07,797::task::595::Storage.TaskManager.Task::(_updateState)
Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::moving from
state init -> state preparing</p>
<p class="">Thread-790::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:07,797::logUtils::48::dispatcher::(wrapper) Run and
protect: repoStats(options=None)</p>
<p class="">Thread-790::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:07,797::logUtils::51::dispatcher::(wrapper) Run and
protect: repoStats, Return response: {}</p>
<p class="">Thread-790::DEBUG::2015-09-22
19:12:07,797::task::1191::Storage.TaskManager.Task::(prepare)
Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::finished: {}</p>
<p class="">Thread-790::DEBUG::2015-09-22
19:12:07,797::task::595::Storage.TaskManager.Task::(_updateState)
Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::moving from
state preparing -> state finished</p>
<p class="">Thread-790::DEBUG::2015-09-22
19:12:07,797::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-790::DEBUG::2015-09-22
19:12:07,797::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-790::DEBUG::2015-09-22
19:12:07,797::task::993::Storage.TaskManager.Task::(_decref)
Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::ref 0 aborting
False</p>
<p class="">Thread-790::DEBUG::2015-09-22
19:12:07,802::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Reactor thread::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:14,816::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from <a moz-do-not-send="true"
href="http://127.0.0.1:52510">127.0.0.1:52510</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:14,822::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11</p>
<p class="">Reactor thread::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:14,823::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from <a moz-do-not-send="true"
href="http://127.0.0.1:52510">127.0.0.1:52510</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:14,823::bindingxmlrpc::1297::XmlDetector::(handle_socket)
xml over http detected from ('127.0.0.1', 52510)</p>
<p class="">BindingXMLRPC::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:14,823::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52510">127.0.0.1:52510</a></p>
<p class="">Thread-791::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:14,823::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52510">127.0.0.1:52510</a> started</p>
<p class="">Thread-791::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:14,825::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52510">127.0.0.1:52510</a> stopped</p>
<p class="">Thread-792::DEBUG::2015-09-22
19:12:20,872::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-793::DEBUG::2015-09-22
19:12:22,832::task::595::Storage.TaskManager.Task::(_updateState)
Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::moving from
state init -> state preparing</p>
<p class="">Thread-793::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:22,832::logUtils::48::dispatcher::(wrapper) Run and
protect: repoStats(options=None)</p>
<p class="">Thread-793::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:22,832::logUtils::51::dispatcher::(wrapper) Run and
protect: repoStats, Return response: {}</p>
<p class="">Thread-793::DEBUG::2015-09-22
19:12:22,832::task::1191::Storage.TaskManager.Task::(prepare)
Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::finished: {}</p>
<p class="">Thread-793::DEBUG::2015-09-22
19:12:22,832::task::595::Storage.TaskManager.Task::(_updateState)
Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::moving from
state preparing -> state finished</p>
<p class="">Thread-793::DEBUG::2015-09-22
19:12:22,833::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-793::DEBUG::2015-09-22
19:12:22,833::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-793::DEBUG::2015-09-22
19:12:22,833::task::993::Storage.TaskManager.Task::(_decref)
Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::ref 0 aborting
False</p>
<p class="">Thread-793::DEBUG::2015-09-22
19:12:22,837::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Reactor thread::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:29,841::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from <a moz-do-not-send="true"
href="http://127.0.0.1:52511">127.0.0.1:52511</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:29,848::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11</p>
<p class="">Reactor thread::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:29,849::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from <a moz-do-not-send="true"
href="http://127.0.0.1:52511">127.0.0.1:52511</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:29,849::bindingxmlrpc::1297::XmlDetector::(handle_socket)
xml over http detected from ('127.0.0.1', 52511)</p>
<p class="">BindingXMLRPC::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:29,849::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52511">127.0.0.1:52511</a></p>
<p class="">Thread-794::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:29,849::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52511">127.0.0.1:52511</a> started</p>
<p class="">Thread-794::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:29,851::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52511">127.0.0.1:52511</a> stopped</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,520::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
Calling 'StoragePool.connectStorageServer' in bridge with
{u'connectionParams': [{u'id':
u'00000000-0000-0000-0000-000000000000', u'connection':
u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'',
u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password':
'********', u'port': u''}], u'storagepoolID':
u'00000000-0000-0000-0000-000000000000', u'domainType': 7}</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,520::task::595::Storage.TaskManager.Task::(_updateState)
Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::moving from
state init -> state preparing</p>
<p class="">Thread-795::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:35,521::logUtils::48::dispatcher::(wrapper) Run and
protect: connectStorageServer(domType=7,
spUUID=u'00000000-0000-0000-0000-000000000000',
conList=[{u'id': u'00000000-0000-0000-0000-000000000000',
u'connection': u'sjcstorage01:/vmstore', u'iqn': u'',
u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs',
u'password': '********', u'port': u''}], options=None)</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,539::fileUtils::143::Storage.fileUtils::(createdir)
Creating directory:
/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore mode:
None</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,540::mount::229::Storage.Misc.excCmd::(_runcmd)
/usr/bin/sudo -n /usr/bin/systemd-run --scope
--slice=vdsm-glusterfs /usr/bin/mount -t glusterfs -o
backup-volfile-servers=sjcstorage02:sjcvhost02
sjcstorage01:/vmstore
/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore (cwd
None)</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,706::hsm::2417::Storage.HSM::(__prefetchDomains)
glusterDomPath: glusterSD/*</p>
<p class="">Thread-796::DEBUG::2015-09-22
19:12:35,707::__init__::298::IOProcessClient::(_run)
Starting IOProcess...</p>
<p class="">Thread-797::DEBUG::2015-09-22
19:12:35,712::__init__::298::IOProcessClient::(_run)
Starting IOProcess...</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,721::hsm::2429::Storage.HSM::(__prefetchDomains)
Found SD uuids: ()</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,721::hsm::2489::Storage.HSM::(connectStorageServer)
knownSDs: {41b75ca9-9783-42a7-9a23-10a2ae3cbb96:
storage.glusterSD.findDomain,
597d5b5b-7c09-4de9-8840-6993bd9b61a6:
storage.glusterSD.findDomain,
ef17fec4-fecf-4d7e-b815-d1db4ef65225:
storage.glusterSD.findDomain}</p>
<p class="">Thread-795::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:35,721::logUtils::51::dispatcher::(wrapper) Run and
protect: connectStorageServer, Return response:
{'statuslist': [{'status': 0, 'id':
u'00000000-0000-0000-0000-000000000000'}]}</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,722::task::1191::Storage.TaskManager.Task::(prepare)
Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::finished:
{'statuslist': [{'status': 0, 'id':
u'00000000-0000-0000-0000-000000000000'}]}</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,722::task::595::Storage.TaskManager.Task::(_updateState)
Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::moving from
state preparing -> state finished</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,722::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,722::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,722::task::993::Storage.TaskManager.Task::(_decref)
Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::ref 0 aborting
False</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,722::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
Return 'StoragePool.connectStorageServer' in bridge with
[{'status': 0, 'id':
u'00000000-0000-0000-0000-000000000000'}]</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,722::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,775::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
Calling 'StoragePool.connectStorageServer' in bridge with
{u'connectionParams': [{u'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection':
u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'',
u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password':
'********', u'port': u''}], u'storagepoolID':
u'00000000-0000-0000-0000-000000000000', u'domainType': 7}</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,775::task::595::Storage.TaskManager.Task::(_updateState)
Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::moving from
state init -> state preparing</p>
<p class="">Thread-798::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:35,776::logUtils::48::dispatcher::(wrapper) Run and
protect: connectStorageServer(domType=7,
spUUID=u'00000000-0000-0000-0000-000000000000',
conList=[{u'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054',
u'connection': u'sjcstorage01:/vmstore', u'iqn': u'',
u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs',
u'password': '********', u'port': u''}], options=None)</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,777::hsm::2417::Storage.HSM::(__prefetchDomains)
glusterDomPath: glusterSD/*</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,782::hsm::2429::Storage.HSM::(__prefetchDomains)
Found SD uuids: ()</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,782::hsm::2489::Storage.HSM::(connectStorageServer)
knownSDs: {41b75ca9-9783-42a7-9a23-10a2ae3cbb96:
storage.glusterSD.findDomain,
597d5b5b-7c09-4de9-8840-6993bd9b61a6:
storage.glusterSD.findDomain,
ef17fec4-fecf-4d7e-b815-d1db4ef65225:
storage.glusterSD.findDomain}</p>
<p class="">Thread-798::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:35,782::logUtils::51::dispatcher::(wrapper) Run and
protect: connectStorageServer, Return response:
{'statuslist': [{'status': 0, 'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,783::task::1191::Storage.TaskManager.Task::(prepare)
Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::finished:
{'statuslist': [{'status': 0, 'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,783::task::595::Storage.TaskManager.Task::(_updateState)
Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::moving from
state preparing -> state finished</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,783::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,783::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,783::task::993::Storage.TaskManager.Task::(_decref)
Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::ref 0 aborting
False</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,783::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
Return 'StoragePool.connectStorageServer' in bridge with
[{'status': 0, 'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,783::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,787::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
Calling 'StorageDomain.create' in bridge with {u'name':
u'sjcvmstore01', u'domainType': 7, u'domainClass': 1,
u'typeArgs': u'sjcstorage01:/vmstore', u'version': u'3',
u'storagedomainID': u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3'}</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,788::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from
state init -> state preparing</p>
<p class="">Thread-801::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:35,788::logUtils::48::dispatcher::(wrapper) Run and
protect: createStorageDomain(storageType=7,
sdUUID=u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',
domainName=u'sjcvmstore01',
typeSpecificArg=u'sjcstorage01:/vmstore', domClass=1,
domVersion=u'3', options=None)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,788::misc::733::Storage.SamplingMethod::(__call__)
Trying to enter sampling method (storage.sdc.refreshStorage)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,788::misc::736::Storage.SamplingMethod::(__call__)
Got in to sampling method</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,788::misc::733::Storage.SamplingMethod::(__call__)
Trying to enter sampling method (storage.iscsi.rescan)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,788::misc::736::Storage.SamplingMethod::(__call__)
Got in to sampling method</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,788::iscsi::431::Storage.ISCSI::(rescan) Performing
SCSI scan, this will take up to 30 seconds</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,788::iscsiadm::97::Storage.Misc.excCmd::(_runCmd)
/usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,821::misc::743::Storage.SamplingMethod::(__call__)
Returning last result</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,821::misc::733::Storage.SamplingMethod::(__call__)
Trying to enter sampling method (storage.hba.rescan)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,821::misc::736::Storage.SamplingMethod::(__call__)
Got in to sampling method</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,821::hba::56::Storage.HBA::(rescan) Starting scan</p>
<p class="">Thread-802::DEBUG::2015-09-22
19:12:35,882::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,912::hba::62::Storage.HBA::(rescan) Scan finished</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,912::misc::743::Storage.SamplingMethod::(__call__)
Returning last result</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,912::multipath::77::Storage.Misc.excCmd::(rescan)
/usr/bin/sudo -n /usr/sbin/multipath (cwd None)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,936::multipath::77::Storage.Misc.excCmd::(rescan)
SUCCESS: <err> = ''; <rc> = 0</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,936::utils::661::root::(execCmd) /sbin/udevadm
settle --timeout=5 (cwd None)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,946::utils::679::root::(execCmd) SUCCESS:
<err> = ''; <rc> = 0</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,947::lvm::498::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' got the operation mutex</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,947::lvm::500::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' released the operation
mutex</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,947::lvm::509::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' got the operation mutex</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,948::lvm::511::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' released the operation
mutex</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,948::lvm::529::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' got the operation mutex</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,948::lvm::531::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' released the operation
mutex</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,948::misc::743::Storage.SamplingMethod::(__call__)
Returning last result</p>
<p class="">Thread-801::ERROR::2015-09-22
19:12:35,949::sdc::138::Storage.StorageDomainCache::(_findDomain)
looking for unfetched domain
c02fda97-62e3-40d3-9a6e-ac5d100f8ad3</p>
<p class="">Thread-801::ERROR::2015-09-22
19:12:35,949::sdc::155::Storage.StorageDomainCache::(_findUnfetchedDomain)
looking for domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,949::lvm::371::Storage.OperationMutex::(_reloadvgs)
Operation 'lvm reload operation' got the operation mutex</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,950::lvm::291::Storage.Misc.excCmd::(cmd)
/usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices {
preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 obtain_device_list_from_udev=0
filter = [ '\''r|.*|'\'' ] } global { locking_type=1
prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 }
backup { retain_min = 50 retain_days = 0 } ' --noheadings
--units b --nosuffix --separator '|' --ignoreskippedcluster
-o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 (cwd None)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,985::lvm::291::Storage.Misc.excCmd::(cmd) FAILED:
<err> = ' WARNING: lvmetad is running but disabled.
Restart lvmetad before enabling it!\n Volume group
"c02fda97-62e3-40d3-9a6e-ac5d100f8ad3" not found\n Cannot
process volume group
c02fda97-62e3-40d3-9a6e-ac5d100f8ad3\n'; <rc> = 5</p>
<p class="">Thread-801::WARNING::2015-09-22
19:12:35,986::lvm::376::Storage.LVM::(_reloadvgs) lvm vgs
failed: 5 [] [' WARNING: lvmetad is running but disabled.
Restart lvmetad before enabling it!', ' Volume group
"c02fda97-62e3-40d3-9a6e-ac5d100f8ad3" not found', ' Cannot
process volume group c02fda97-62e3-40d3-9a6e-ac5d100f8ad3']</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,987::lvm::416::Storage.OperationMutex::(_reloadvgs)
Operation 'lvm reload operation' released the operation
mutex</p>
<p class="">Thread-801::ERROR::2015-09-22
19:12:35,997::sdc::144::Storage.StorageDomainCache::(_findDomain)
domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 not found</p>
<p class="">Traceback (most recent call last):</p>
<p class=""> File "/usr/share/vdsm/storage/sdc.py", line 142,
in _findDomain</p>
<p class=""> dom = findMethod(sdUUID)</p>
<p class=""> File "/usr/share/vdsm/storage/sdc.py", line 172,
in _findUnfetchedDomain</p>
<p class=""> raise se.StorageDomainDoesNotExist(sdUUID)</p>
<p class="">StorageDomainDoesNotExist: Storage domain does not
exist: (u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',)</p>
<p class="">Thread-801::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:35,998::nfsSD::69::Storage.StorageDomain::(create)
sdUUID=c02fda97-62e3-40d3-9a6e-ac5d100f8ad3
domainName=sjcvmstore01 remotePath=sjcstorage01:/vmstore
domClass=1</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,015::__init__::298::IOProcessClient::(_run)
Starting IOProcess...</p>
<p class="">Thread-801::ERROR::2015-09-22
19:12:36,067::task::866::Storage.TaskManager.Task::(_setError)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Unexpected
error</p>
<p class="">Traceback (most recent call last):</p>
<p class=""> File "/usr/share/vdsm/storage/task.py", line
873, in _run</p>
<p class=""> return fn(*args, **kargs)</p>
<p class=""> File "/usr/share/vdsm/logUtils.py", line 49, in
wrapper</p>
<p class=""> res = f(*args, **kwargs)</p>
<p class=""> File "/usr/share/vdsm/storage/hsm.py", line
2697, in createStorageDomain</p>
<p class=""> domVersion)</p>
<p class=""> File "/usr/share/vdsm/storage/nfsSD.py", line
84, in create</p>
<p class=""> remotePath, storageType, version)</p>
<p class=""> File "/usr/share/vdsm/storage/fileSD.py", line
264, in _prepareMetadata</p>
<p class=""> "create meta file '%s' failed: %s" %
(metaFile, str(e)))</p>
<p class="">StorageDomainMetadataCreationError: Error creating
a storage domain's metadata: ("create meta file 'outbox'
failed: [Errno 5] Input/output error",)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,067::task::885::Storage.TaskManager.Task::(_run)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Task._run:
d2d29352-8677-45cb-a4ab-06aa32cf1acb (7,
u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3', u'sjcvmstore01',
u'sjcstorage01:/vmstore', 1, u'3') {} failed - stopping task</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,067::task::1246::Storage.TaskManager.Task::(stop)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::stopping in
state preparing (force False)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,067::task::993::Storage.TaskManager.Task::(_decref)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::ref 1 aborting
True</p>
<p class="">Thread-801::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:36,067::task::1171::Storage.TaskManager.Task::(prepare)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::aborting: Task
is aborted: "Error creating a storage domain's metadata" -
code 362</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::task::1176::Storage.TaskManager.Task::(prepare)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Prepare:
aborted: Error creating a storage domain's metadata</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::task::993::Storage.TaskManager.Task::(_decref)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::ref 0 aborting
True</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::task::928::Storage.TaskManager.Task::(_doAbort)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Task._doAbort:
force False</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from
state preparing -> state aborting</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::task::550::Storage.TaskManager.Task::(__state_aborting)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::_aborting:
recover policy none</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from
state aborting -> state failed</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-801::ERROR::2015-09-22
19:12:36,068::dispatcher::76::Storage.Dispatcher::(wrapper)
{'status': {'message': 'Error creating a storage domain\'s
metadata: ("create meta file \'outbox\' failed: [Errno 5]
Input/output error",)', 'code': 362}}</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,069::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,180::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
Calling 'StoragePool.disconnectStorageServer' in bridge with
{u'connectionParams': [{u'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection':
u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'',
u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password':
'********', u'port': u''}], u'storagepoolID':
u'00000000-0000-0000-0000-000000000000', u'domainType': 7}</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,181::task::595::Storage.TaskManager.Task::(_updateState)
Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::moving from
state init -> state preparing</p>
<p class="">Thread-807::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:36,182::logUtils::48::dispatcher::(wrapper) Run and
protect: disconnectStorageServer(domType=7,
spUUID=u'00000000-0000-0000-0000-000000000000',
conList=[{u'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054',
u'connection': u'sjcstorage01:/vmstore', u'iqn': u'',
u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs',
u'password': '********', u'port': u''}], options=None)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,182::mount::229::Storage.Misc.excCmd::(_runcmd)
/usr/bin/sudo -n /usr/bin/umount -f -l
/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore (cwd
None)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,222::misc::733::Storage.SamplingMethod::(__call__)
Trying to enter sampling method (storage.sdc.refreshStorage)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,222::misc::736::Storage.SamplingMethod::(__call__)
Got in to sampling method</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,222::misc::733::Storage.SamplingMethod::(__call__)
Trying to enter sampling method (storage.iscsi.rescan)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,222::misc::736::Storage.SamplingMethod::(__call__)
Got in to sampling method</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,223::iscsi::431::Storage.ISCSI::(rescan) Performing
SCSI scan, this will take up to 30 seconds</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,223::iscsiadm::97::Storage.Misc.excCmd::(_runCmd)
/usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,258::misc::743::Storage.SamplingMethod::(__call__)
Returning last result</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,258::misc::733::Storage.SamplingMethod::(__call__)
Trying to enter sampling method (storage.hba.rescan)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,258::misc::736::Storage.SamplingMethod::(__call__)
Got in to sampling method</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,258::hba::56::Storage.HBA::(rescan) Starting scan</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,350::hba::62::Storage.HBA::(rescan) Scan finished</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,350::misc::743::Storage.SamplingMethod::(__call__)
Returning last result</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,350::multipath::77::Storage.Misc.excCmd::(rescan)
/usr/bin/sudo -n /usr/sbin/multipath (cwd None)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,374::multipath::77::Storage.Misc.excCmd::(rescan)
SUCCESS: <err> = ''; <rc> = 0</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,374::utils::661::root::(execCmd) /sbin/udevadm
settle --timeout=5 (cwd None)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,383::utils::679::root::(execCmd) SUCCESS:
<err> = ''; <rc> = 0</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,384::lvm::498::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' got the operation mutex</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,385::lvm::500::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' released the operation
mutex</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,385::lvm::509::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' got the operation mutex</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,385::lvm::511::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' released the operation
mutex</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,386::lvm::529::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' got the operation mutex</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,386::lvm::531::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' released the operation
mutex</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,386::misc::743::Storage.SamplingMethod::(__call__)
Returning last result</p>
<p class="">Thread-807::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:36,386::logUtils::51::dispatcher::(wrapper) Run and
protect: disconnectStorageServer, Return response:
{'statuslist': [{'status': 0, 'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,387::task::1191::Storage.TaskManager.Task::(prepare)
Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::finished:
{'statuslist': [{'status': 0, 'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,387::task::595::Storage.TaskManager.Task::(_updateState)
Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::moving from
state preparing -> state finished</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,387::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,387::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,387::task::993::Storage.TaskManager.Task::(_decref)
Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::ref 0 aborting
False</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,388::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
Return 'StoragePool.disconnectStorageServer' in bridge with
[{'status': 0, 'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,388::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-808::DEBUG::2015-09-22
19:12:37,868::task::595::Storage.TaskManager.Task::(_updateState)
Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving from
state init -> state preparing</p>
<p class="">Thread-808::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:37,868::logUtils::48::dispatcher::(wrapper) Run and
protect: repoStats(options=None)</p>
<p class="">Thread-808::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:37,868::logUtils::51::dispatcher::(wrapper) Run and
protect: repoStats, Return response: {}</p>
<p class="">Thread-808::DEBUG::2015-09-22
19:12:37,868::task::1191::Storage.TaskManager.Task::(prepare)
Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::finished: {}</p>
<p class="">Thread-808::DEBUG::2015-09-22
19:12:37,868::task::595::Storage.TaskManager.Task::(_updateState)
Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving from
state preparing -> state finished</p>
<p class="">Thread-808::DEBUG::2015-09-22
19:12:37,868::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-808::DEBUG::2015-09-22
19:12:37,868::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-808::DEBUG::2015-09-22
19:12:37,868::task::993::Storage.TaskManager.Task::(_decref)
Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::ref 0 aborting
False</p>
<p class="">Thread-808::DEBUG::2015-09-22
19:12:37,873::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Reactor thread::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:44,867::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from <a moz-do-not-send="true"
href="http://127.0.0.1:52512">127.0.0.1:52512</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:44,874::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11</p>
<p class="">Reactor thread::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:44,875::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from <a moz-do-not-send="true"
href="http://127.0.0.1:52512">127.0.0.1:52512</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:44,875::bindingxmlrpc::1297::XmlDetector::(handle_socket)
xml over http detected from ('127.0.0.1', 52512)</p>
<p class="">BindingXMLRPC::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:44,875::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52512">127.0.0.1:52512</a></p>
<p class="">Thread-809::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:44,876::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52512">127.0.0.1:52512</a> started</p>
<p class="">Thread-809::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:44,877::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52512">127.0.0.1:52512</a> stopped</p>
<p class="">Thread-810::DEBUG::2015-09-22
19:12:50,889::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-811::DEBUG::2015-09-22
19:12:52,902::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::moving from
state init -> state preparing</p>
<p class="">Thread-811::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:52,902::logUtils::48::dispatcher::(wrapper) Run and
protect: repoStats(options=None)</p>
<p class="">Thread-811::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:52,902::logUtils::51::dispatcher::(wrapper) Run and
protect: repoStats, Return response: {}</p>
<p class="">Thread-811::DEBUG::2015-09-22
19:12:52,902::task::1191::Storage.TaskManager.Task::(prepare)
Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::finished: {}</p>
<p class="">Thread-811::DEBUG::2015-09-22
19:12:52,903::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::moving from
state preparing -> state finished</p>
<p class="">Thread-811::DEBUG::2015-09-22
19:12:52,903::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-811::DEBUG::2015-09-22
19:12:52,903::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-811::DEBUG::2015-09-22
19:12:52,903::task::993::Storage.TaskManager.Task::(_decref)
Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::ref 0 aborting
False</p>
<p class="">Thread-811::DEBUG::2015-09-22
19:12:52,908::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Reactor thread::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:59,895::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from <a moz-do-not-send="true"
href="http://127.0.0.1:52513">127.0.0.1:52513</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:59,902::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11</p>
<p class="">Reactor thread::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:59,902::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from <a moz-do-not-send="true"
href="http://127.0.0.1:52513">127.0.0.1:52513</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:59,902::bindingxmlrpc::1297::XmlDetector::(handle_socket)
xml over http detected from ('127.0.0.1', 52513)</p>
<p class="">BindingXMLRPC::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:59,903::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52513">127.0.0.1:52513</a></p>
<p class="">Thread-812::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:59,903::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52513">127.0.0.1:52513</a> started</p>
<p class="">Thread-812::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:59,904::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52513">127.0.0.1:52513</a> stopped</p>
<p class="">Thread-813::DEBUG::2015-09-22
19:13:05,898::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-814::DEBUG::2015-09-22
19:13:07,934::task::595::Storage.TaskManager.Task::(_updateState)
Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::moving from
state init -> state preparing</p>
<p class="">Thread-814::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:13:07,935::logUtils::48::dispatcher::(wrapper) Run and
protect: repoStats(options=None)</p>
<p class="">Thread-814::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:13:07,935::logUtils::51::dispatcher::(wrapper) Run and
protect: repoStats, Return response: {}</p>
<p class="">Thread-814::DEBUG::2015-09-22
19:13:07,935::task::1191::Storage.TaskManager.Task::(prepare)
Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::finished: {}</p>
<p class="">Thread-814::DEBUG::2015-09-22
19:13:07,935::task::595::Storage.TaskManager.Task::(_updateState)
Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::moving from
state preparing -> state finished</p>
<p class="">Thread-814::DEBUG::2015-09-22
19:13:07,935::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-814::DEBUG::2015-09-22
19:13:07,935::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-814::DEBUG::2015-09-22
19:13:07,935::task::993::Storage.TaskManager.Task::(_decref)
Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::ref 0 aborting
False</p>
<p class="">Thread-814::DEBUG::2015-09-22
19:13:07,939::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Reactor thread::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:13:14,921::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from <a moz-do-not-send="true"
href="http://127.0.0.1:52515">127.0.0.1:52515</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:13:14,927::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11</p>
<p class="">Reactor thread::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:13:14,928::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from <a moz-do-not-send="true"
href="http://127.0.0.1:52515">127.0.0.1:52515</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:13:14,928::bindingxmlrpc::1297::XmlDetector::(handle_socket)
xml over http detected from ('127.0.0.1', 52515)</p>
<p class="">BindingXMLRPC::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:13:14,928::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52515">127.0.0.1:52515</a></p>
<p class="">Thread-815::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:13:14,928::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52515">127.0.0.1:52515</a> started</p>
<p class="">Thread-815::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:13:14,930::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52515">127.0.0.1:52515</a> stopped</p>
<p class=""><span class=""></span></p>
<p class="">Thread-816::DEBUG::2015-09-22
19:13:20,906::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
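
Given the "[Errno 5] Input/output error" in the createStorageDomain traceback
above, a rough way to reproduce the failure outside of vdsm (purely a sketch;
/mnt/gtest is a throw-away mountpoint, not part of this setup) is to repeat
the same mount by hand and attempt a small direct write, similar to what vdsm
does when it creates the domain metadata files:

    mkdir -p /mnt/gtest
    mount -t glusterfs -o backup-volfile-servers=sjcstorage02:sjcvhost02 \
        sjcstorage01:/vmstore /mnt/gtest
    dd if=/dev/zero of=/mnt/gtest/probe bs=4096 count=1 oflag=direct
    umount /mnt/gtest
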
gluster logs:

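Assuming these are the fuse client logs, they are normally written under
/var/log/glusterfs on the host and named after the mountpoint, so something
like the following should locate the log for the vmstore mount:

    ls /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*.log
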
<p class=""><span class="">+------------------------------------------------------------------------------+</span></p>
<p class=""><span class=""> 1: volume vmstore-client-0</span></p>
<p class=""><span class=""> 2: type protocol/client</span></p>
<p class=""><span class=""> 3: option ping-timeout 42</span></p>
<p class=""><span class=""> 4: option remote-host
sjcstorage01</span></p>
<p class=""><span class=""> 5: option remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class=""> 6: option transport-type
socket</span></p>
<p class=""><span class=""> 7: option send-gids true</span></p>
<p class=""><span class=""> 8: end-volume</span></p>
<p class=""><span class=""> 9: </span></p>
<p class=""><span class=""> 10: volume vmstore-client-1</span></p>
<p class=""><span class=""> 11: type protocol/client</span></p>
<p class=""><span class=""> 12: option ping-timeout 42</span></p>
<p class=""><span class=""> 13: option remote-host
sjcstorage02</span></p>
<p class=""><span class=""> 14: option remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class=""> 15: option transport-type
socket</span></p>
<p class=""><span class=""> 16: option send-gids true</span></p>
<p class=""><span class=""> 17: end-volume</span></p>
<p class=""><span class=""> 18: </span></p>
<p class=""><span class=""> 19: volume vmstore-client-2</span></p>
<p class=""><span class=""> 20: type protocol/client</span></p>
<p class=""><span class=""> 21: option ping-timeout 42</span></p>
<p class=""><span class=""> 22: option remote-host
sjcvhost02</span></p>
<p class=""><span class=""> 23: option remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class=""> 24: option transport-type
socket</span></p>
<p class=""><span class=""> 25: option send-gids true</span></p>
<p class=""><span class=""> 26: end-volume</span></p>
<p class=""><span class=""> 27: </span></p>
<p class=""><span class=""> 28: volume vmstore-replicate-0</span></p>
<p class=""><span class=""> 29: type cluster/replicate</span></p>
<p class=""><span class=""> 30: option arbiter-count 1</span></p>
<p class=""><span class=""> 31: subvolumes
vmstore-client-0 vmstore-client-1 vmstore-client-2</span></p>
<p class=""><span class=""> 32: end-volume</span></p>
<p class=""><span class=""> 33: </span></p>
<p class=""><span class=""> 34: volume vmstore-dht</span></p>
<p class=""><span class=""> 35: type cluster/distribute</span></p>
<p class=""><span class=""> 36: subvolumes
vmstore-replicate-0</span></p>
<p class=""><span class=""> 37: end-volume</span></p>
<p class=""><span class=""> 38: </span></p>
<p class=""><span class=""> 39: volume vmstore-write-behind</span></p>
<p class=""><span class=""> 40: type
performance/write-behind</span></p>
<p class=""><span class=""> 41: subvolumes vmstore-dht</span></p>
<p class=""><span class=""> 42: end-volume</span></p>
<p class=""><span class=""> 43: </span></p>
<p class=""><span class=""> 44: volume vmstore-read-ahead</span></p>
<p class=""><span class=""> 45: type
performance/read-ahead</span></p>
<p class=""><span class=""> 46: subvolumes
vmstore-write-behind</span></p>
<p class=""><span class=""> 47: end-volume</span></p>
<p class=""><span class=""> 48: </span></p>
<p class=""><span class=""> 49: volume vmstore-readdir-ahead</span></p>
<p class=""><span class=""> 50: type
performance/readdir-ahead</span></p>
<p class=""><span class="">52: end-volume</span></p>
<p class=""><span class=""> 53: </span></p>
<p class=""><span class=""> 54: volume vmstore-io-cache</span></p>
<p class=""><span class=""> 55: type performance/io-cache</span></p>
<p class=""><span class=""> 56: subvolumes
vmstore-readdir-ahead</span></p>
<p class=""><span class=""> 57: end-volume</span></p>
<p class=""><span class=""> 58: </span></p>
<p class=""><span class=""> 59: volume vmstore-quick-read</span></p>
<p class=""><span class=""> 60: type
performance/quick-read</span></p>
<p class=""><span class=""> 61: subvolumes
vmstore-io-cache</span></p>
<p class=""><span class=""> 62: end-volume</span></p>
<p class=""><span class=""> 63: </span></p>
<p class=""><span class=""> 64: volume vmstore-open-behind</span></p>
<p class=""><span class=""> 65: type
performance/open-behind</span></p>
<p class=""><span class=""> 66: subvolumes
vmstore-quick-read</span></p>
<p class=""><span class=""> 67: end-volume</span></p>
<p class=""><span class=""> 68: </span></p>
<p class=""><span class=""> 69: volume vmstore-md-cache</span></p>
<p class=""><span class=""> 70: type performance/md-cache</span></p>
<p class=""><span class=""> 71: subvolumes
vmstore-open-behind</span></p>
<p class=""><span class=""> 72: end-volume</span></p>
<p class=""><span class=""> 73: </span></p>
<p class=""><span class=""> 74: volume vmstore</span></p>
<p class=""><span class=""> 75: type debug/io-stats</span></p>
<p class=""><span class=""> 76: option latency-measurement
off</span></p>
<p class=""><span class=""> 77: option count-fop-hits off</span></p>
<p class=""><span class=""> 78: subvolumes
vmstore-md-cache</span></p>
<p class=""><span class=""> 79: end-volume</span></p>
<p class=""><span class=""> 80: </span></p>
<p class=""><span class=""> 81: volume meta-autoload</span></p>
<p class=""><span class=""> 82: type meta</span></p>
<p class=""><span class=""> 83: subvolumes vmstore</span></p>
<p class=""><span class=""> 84: end-volume</span></p>
<p class=""><span class=""> 85: </span></p>
<p class=""><span class="">+------------------------------------------------------------------------------+</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.586205] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-0:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.586325] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-1:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.586480] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-2:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.595052] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-0: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.595397] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-1: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.595576] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-2: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.595721] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-0: Connected to vmstore-client-0,
attached to remote volume '/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.595738] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-0: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.596044] I
[MSGID: 108005] [afr-common.c:3998:afr_notify]
0-vmstore-replicate-0: Subvolume 'vmstore-client-0' came
back up; going online.</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.596170] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-1: Connected to vmstore-client-1,
attached to remote volume '/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.596189] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-1: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">
</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.596495] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-2: Connected to vmstore-client-2,
attached to remote volume :</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.596189] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-1: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.596495] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-2: Connected to vmstore-client-2,
attached to remote volume '/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.596506] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-2: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.608758] I
[fuse-bridge.c:5053:fuse_graph_setup] 0-fuse: switched to
graph 0</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.608910] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-0: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.608936] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-1: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.608950] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-2: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.609695] I
[MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 2</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.609868] I
[fuse-bridge.c:3979:fuse_init] 0-glusterfs-fuse: FUSE
inited with protocol versions: glusterfs 7.22 kernel 7.22</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.616577] I
[MSGID: 109063] [dht-layout.c:702:dht_layout_normalize]
0-vmstore-dht: Found anomalies in / (gfid =
00000000-0000-0000-0000-000000000001). Holes=1 overlaps=0</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.620230] I
[MSGID: 109036]
[dht-common.c:7754:dht_log_new_layout_for_dir_selfheal]
0-vmstore-dht: Setting layout of / with [Subvol_name:
vmstore-replicate-0, Err: -1 , Start: 0 , Stop: 4294967295
, Hash: 1 ], </span></p>
<p class=""><span class="">[2015-09-22 05:29:08.122415] W
[fuse-bridge.c:1230:fuse_err_cbk] 0-glusterfs-fuse: 26:
REMOVEXATTR() /__DIRECT_IO_TEST__ => -1 (No data
available)</span></p>
<p class=""><span class="">[2015-09-22 05:29:08.137359] I
[MSGID: 109036]
[dht-common.c:7754:dht_log_new_layout_for_dir_selfheal]
0-vmstore-dht: Setting layout of
/061b73d5-ae59-462e-b674-ea9c60d436c2 with [Subvol_name:
vmstore-replicate-0, Err: -1 , Start: 0 , Stop: 4294967295
, Hash: 1 ], </span></p>
<p class=""><span class="">[2015-09-22 05:29:08.145835] I
[MSGID: 109036]
[dht-common.c:7754:dht_log_new_layout_for_dir_selfheal]
0-vmstore-dht: Setting layout of
/061b73d5-ae59-462e-b674-ea9c60d436c2/dom_md with
[Subvol_name: vmstore-replicate-0, Err: -1 , Start: 0 ,
Stop: 4294967295 , Hash: 1 ], </span></p>
<p class=""><span class="">[2015-09-22 05:30:57.897819] I
[MSGID: 100030] [glusterfsd.c:2301:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs
version 3.7.4 (args: /usr/sbin/glusterfs
--volfile-server=sjcvhost02 --volfile-server=sjcstorage01
--volfile-server=sjcstorage02 --volfile-id=/vmstore
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.909889] I
[MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 1</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.923087] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-0:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.925701] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-1:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.927984] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-2:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">Final graph:</span></p>
<p class=""><span class="">+------------------------------------------------------------------------------+</span></p>
<p class=""><span class=""> 1: volume vmstore-client-0</span></p>
<p class=""><span class=""> 2: type protocol/client</span></p>
<p class=""><span class=""> 3: option ping-timeout 42</span></p>
<p class=""><span class=""> 4: option remote-host
sjcstorage01</span></p>
<p class=""><span class=""> 5: option remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class=""> 6: option transport-type
socket</span></p>
<p class=""><span class=""> 7: option send-gids true</span></p>
<p class=""><span class=""> 8: end-volume</span></p>
<p class=""><span class=""> 9: </span></p>
<p class=""><span class=""> 10: volume vmstore-client-1</span></p>
<p class=""><span class=""> 11: type protocol/client</span></p>
<p class=""><span class=""> 12: option ping-timeout 42</span></p>
<p class=""><span class=""> 13: option remote-host
sjcstorage02</span></p>
<p class=""><span class="">
</span></p>
<p class=""><span class=""> 14: option remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class=""> 15: option transport-type
socket</span></p>
<p class=""><span class=""> 16: option send-gids true</span></p>
<p class=""><span class=""> 17: end-volume</span></p>
<p class=""><span class=""> 18: </span></p>
<p class=""><span class=""> 19: volume vmstore-client-2</span></p>
<p class=""><span class=""> 20: type protocol/client</span></p>
<p class=""><span class=""> 21: option ping-timeout 42</span></p>
<p class=""><span class=""> 22: option remote-host
sjcvhost02</span></p>
<p class=""><span class=""> 23: option remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class=""> 24: option transport-type
socket</span></p>
<p class=""><span class=""> 25: option send-gids true</span></p>
<p class=""><span class=""> 26: end-volume</span></p>
<p class=""><span class=""> 27: </span></p>
<p class=""><span class=""> 28: volume vmstore-replicate-0</span></p>
<p class=""><span class=""> 29: type cluster/replicate</span></p>
<p class=""><span class=""> 30: option arbiter-count 1</span></p>
<p class=""><span class=""> 31: subvolumes
vmstore-client-0 vmstore-client-1 vmstore-client-2</span></p>
<p class=""><span class=""> 32: end-volume</span></p>
<p class=""><span class=""> 33: </span></p>
<p class=""><span class=""> 34: volume vmstore-dht</span></p>
<p class=""><span class=""> 35: type cluster/distribute</span></p>
<p class=""><span class=""> 36: subvolumes
vmstore-replicate-0</span></p>
<p class=""><span class=""> 37: end-volume</span></p>
<p class=""><span class=""> 38: </span></p>
<p class=""><span class=""> 39: volume vmstore-write-behind</span></p>
<p class=""><span class=""> 40: type
performance/write-behind</span></p>
<p class=""><span class=""> 41: subvolumes vmstore-dht</span></p>
<p class=""><span class=""> 42: end-volume</span></p>
<p class=""><span class=""> 43: </span></p>
<p class=""><span class=""> 44: volume vmstore-read-ahead</span></p>
<p class=""><span class=""> 45: type
performance/read-ahead</span></p>
<p class=""><span class=""> 46: subvolumes
vmstore-write-behind</span></p>
<p class=""><span class=""> 47: end-volume</span></p>
<p class=""><span class=""> 48: </span></p>
<p class=""><span class=""> 49: volume vmstore-readdir-ahead</span></p>
<p class=""><span class=""> 50: type
performance/readdir-ahead</span></p>
<p class=""><span class=""> 51: subvolumes
vmstore-read-ahead</span></p>
<p class=""><span class=""> 52: end-volume</span></p>
<p class=""><span class=""> 53: </span></p>
<p class=""><span class=""> 54: volume vmstore-io-cache</span></p>
<p class=""><span class=""> 55: type performance/io-cache</span></p>
<p class=""><span class=""> 56: subvolumes
vmstore-readdir-ahead</span></p>
<p class=""><span class=""> 57: end-volume</span></p>
<p class=""><span class=""> 58: </span></p>
<p class=""><span class=""> 59: volume vmstore-quick-read</span></p>
<p class=""><span class=""> 60: type
performance/quick-read</span></p>
<p class=""><span class=""> 61: subvolumes
vmstore-io-cache</span></p>
<p class=""><span class=""> 62: end-volume</span></p>
<p class=""><span class=""> 63: </span></p>
<p class=""><span class=""> 64: volume vmstore-open-behind</span></p>
<p class=""><span class=""> 65: type
performance/open-behind</span></p>
<p class=""><span class=""> 66: subvolumes
vmstore-quick-read</span></p>
<p class=""><span class=""> 67: end-volume</span></p>
<p class=""><span class=""> 68: </span></p>
<p class=""><span class=""> 69: volume vmstore-md-cache</span></p>
<p class=""><span class="">
</span></p>
<p class=""><span class=""> 70: type performance/md-cache</span></p>
<p class=""><span class=""> 71: subvolumes
vmstore-open-behind</span></p>
<p class=""><span class=""> 72: end-volume</span></p>
<p class=""><span class=""> 73: </span></p>
<p class=""><span class=""> 74: volume vmstore</span></p>
<p class=""><span class=""> 75: type debug/io-stats</span></p>
<p class=""><span class=""> 76: option latency-measurement
off</span></p>
<p class=""><span class=""> 77: option count-fop-hits off</span></p>
<p class=""><span class=""> 78: subvolumes
vmstore-md-cache</span></p>
<p class=""><span class=""> 79: end-volume</span></p>
<p class=""><span class=""> 80: </span></p>
<p class=""><span class=""> 81: volume meta-autoload</span></p>
<p class=""><span class=""> 82: type meta</span></p>
<p class=""><span class=""> 83: subvolumes vmstore</span></p>
<p class=""><span class=""> 84: end-volume</span></p>
<p class=""><span class=""> 85: </span></p>
<p class=""><span class="">+------------------------------------------------------------------------------+</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.934021] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-0:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.934145] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-1:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.934491] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-2:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.942198] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-0: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.942545] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-1: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.942659] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-2: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.942797] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-0: Connected to vmstore-client-0,
attached to remote volume '/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.942808] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-0: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.943036] I
[MSGID: 108005] [afr-common.c:3998:afr_notify]
0-vmstore-replicate-0: Subvolume 'vmstore-client-0' came
back up; going online.</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.943078] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-1: Connected to vmstore-client-1,
attached to remote volume '/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.943086] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-1: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.943292] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-2: Connected to vmstore-client-2,
attached to remote volume '/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.943302] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-2: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.953887] I
[fuse-bridge.c:5053:fuse_graph_setup] 0-fuse: switched to
graph 0</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.954071] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-0: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.954105] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-1: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.954124] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-2: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.955282] I
[fuse-bridge.c:3979:fuse_init] 0-glusterfs-fuse: FUSE
inited with protocol versions: glusterfs 7.22 kernel 7.22</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.955738] I
[MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 2</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.970232] I
[fuse-bridge.c:4900:fuse_thread_proc] 0-fuse: unmounting
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.970834] W
[glusterfsd.c:1219:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7df5) [0x7f187139fdf5]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
[0x7f1872a09785]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x69)
[0x7f1872a09609] ) 0-: received signum (15), shutting down</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.970848] I
[fuse-bridge.c:5595:fini] 0-fuse: Unmounting
'/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.</span></p>
<p class=""><span class="">[2015-09-22 05:30:58.420973] I
[fuse-bridge.c:4900:fuse_thread_proc] 0-fuse: unmounting
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore</span></p>
<p class=""><span class="">[2015-09-22 05:30:58.421355] W
[glusterfsd.c:1219:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7df5) [0x7f8267cd4df5]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
[0x7f826933e785]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x69)
[0x7f826933e609] ) 0-: received signum (15), shutting down</span></p>
<p class=""><span class="">[2015-09-22 05:30:58.421369] I
[fuse-bridge.c:5595:fini] 0-fuse: Unmounting
'/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.534410] I
[MSGID: 100030] [glusterfsd.c:2301:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs
version 3.7.4 (args: /usr/sbin/glusterfs
--volfile-server=sjcvhost02 --volfile-server=sjcstorage01
--volfile-server=sjcstorage02 --volfile-id=/vmstore
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.545686] I
[MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 1</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.553019] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-0:
parent translators are ready, attempting connect on
transport</span></p>
<p class="">
</p>
<p class=""><span class="">[2015-09-22 05:31:09.555552] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-1:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.557989] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-2:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">Final graph:</span></p>
<p class=""><span class="">+------------------------------------------------------------------------------+</span></p>
<p class=""><span class=""> 1: volume vmstore-client-0</span></p>
<p class=""><span class=""> 2: type protocol/client</span></p>
<p class=""><span class=""> 3: option ping-timeout 42</span></p>
<p class=""><span class=""> 4: option remote-host
sjcstorage01</span></p>
<p class=""><span class=""> 5: option remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class=""> 6: option transport-type
socket</span></p>
<p class=""><span class=""> 7: option send-gids true</span></p>
<p class=""><span class=""> 8: end-volume</span></p>
<p class=""><span class=""> 9: </span></p>
<p class=""><span class=""> 10: volume vmstore-client-1</span></p>
<p class=""><span class=""> 11: type protocol/client</span></p>
<p class=""><span class=""> 12: option ping-timeout 42</span></p>
<p class=""><span class=""> 13: option remote-host
sjcstorage02</span></p>
<p class=""><span class=""> 14: option remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class=""> 15: option transport-type
socket</span></p>
<p class=""><span class=""> 16: option send-gids true</span></p>
<p class=""><span class=""> 17: end-volume</span></p>
<p class=""><span class=""> 18: </span></p>
<p class=""><span class=""> 19: volume vmstore-client-2</span></p>
<p class=""><span class=""> 20: type protocol/client</span></p>
<p class=""><span class=""> 21: option ping-timeout 42</span></p>
<p class=""><span class=""> 22: option remote-host
sjcvhost02</span></p>
<p class=""><span class=""> 23: option remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class=""> 24: option transport-type
socket</span></p>
<p class=""><span class=""> 25: option send-gids true</span></p>
<p class=""><span class=""> 26: end-volume</span></p>
<p class=""><span class=""> 27: </span></p>
<p class=""><span class=""> 28: volume vmstore-replicate-0</span></p>
<p class=""><span class=""> 29: type cluster/replicate</span></p>
<p class=""><span class=""> 30: option arbiter-count 1</span></p>
<p class=""><span class=""> 31: subvolumes
vmstore-client-0 vmstore-client-1 vmstore-client-2</span></p>
<p class=""><span class=""> 32: end-volume</span></p>
<p class=""><span class=""> 33: </span></p>
<p class=""><span class=""> 34: volume vmstore-dht</span></p>
<p class=""><span class=""> 35: type cluster/distribute</span></p>
<p class=""><span class=""> 36: subvolumes
vmstore-replicate-0</span></p>
<p class=""><span class=""> 37: end-volume</span></p>
<p class=""><span class=""> 38: </span></p>
<p class=""><span class=""> 39: volume vmstore-write-behind</span></p>
<p class=""><span class=""> 40: type
performance/write-behind</span></p>
<p class=""><span class=""> 41: subvolumes vmstore-dht</span></p>
<p class=""><span class=""> 42: end-volume</span></p>
<p class=""><span class=""> 43: </span></p>
<p class=""><span class=""> 44: volume vmstore-read-ahead</span></p>
<p class=""><span class=""> 45: type
performance/read-ahead</span></p>
<p class=""><span class=""> 46: subvolumes
vmstore-write-behind</span></p>
<p class=""><span class=""> 47: end-volume</span></p>
<p class=""><span class=""> 48: </span></p>
<p class=""><span class=""> 49: volume vmstore-readdir-ahead</span></p>
<p class=""><span class=""> 50: type
performance/readdir-ahead</span></p>
<p class=""><span class=""> 51: subvolumes
vmstore-read-ahead</span></p>
<p class="">
</p>
<p class=""><span class=""> 52: end-volume</span></p>
<p class=""><span class=""> 53: </span></p>
<p class=""><span class=""> 54: volume vmstore-io-cache</span></p>
<p class=""><span class=""> 55: type performance/io-cache</span></p>
<p class=""><span class=""> 56: subvolumes
vmstore-readdir-ahead</span></p>
<p class=""><span class=""> 57: end-volume</span></p>
<p class=""><span class=""> 58: </span></p>
<p class=""><span class=""> 59: volume vmstore-quick-read</span></p>
<p class=""><span class=""> 60: type
performance/quick-read</span></p>
<p class=""><span class=""> 61: subvolumes
vmstore-io-cache</span></p>
<p class=""><span class=""> 62: end-volume</span></p>
<p class=""><span class=""> 63: </span></p>
<p class=""><span class=""> 64: volume vmstore-open-behind</span></p>
<p class=""><span class=""> 65: type
performance/open-behind</span></p>
<p class=""><span class=""> 66: subvolumes
vmstore-quick-read</span></p>
<p class=""><span class=""> 67: end-volume</span></p>
<p class=""><span class=""> 68: </span></p>
<p class=""><span class=""> 69: volume vmstore-md-cache</span></p>
<p class=""><span class=""> 70: type performance/md-cache</span></p>
<p class=""><span class=""> 71: subvolumes
vmstore-open-behind</span></p>
<p class=""><span class=""> 72: end-volume</span></p>
<p class=""><span class=""> 73: </span></p>
<p class=""><span class=""> 74: volume vmstore</span></p>
<p class=""><span class=""> 75: type debug/io-stats</span></p>
<p class=""><span class=""> 76: option latency-measurement
off</span></p>
<p class=""><span class=""> 77: option count-fop-hits off</span></p>
<p class=""><span class=""> 78: subvolumes
vmstore-md-cache</span></p>
<p class=""><span class=""> 79: end-volume</span></p>
<p class=""><span class=""> 80: </span></p>
<p class=""><span class=""> 81: volume meta-autoload</span></p>
<p class=""><span class=""> 82: type meta</span></p>
<p class=""><span class=""> 83: subvolumes vmstore</span></p>
<p class=""><span class=""> 84: end-volume</span></p>
<p class=""><span class=""> 85: </span></p>
<p class=""><span class="">+------------------------------------------------------------------------------+</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.563262] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-0:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.563431] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-1:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.563877] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-2:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.572443] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-1: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.572599] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-0: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.572742] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-2: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.573165] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-1: Connected to vmstore-client-1,
attached to remote volume '/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.573186] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-1: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.573395] I
[MSGID: 108005] [afr-common.c:3998:afr_notify]
0-vmstore-replicate-0: Subvolume 'vmstore-client-1' came
back up; going online.</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.573427] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-0: Connected to vmstore-client-0,
attached to remote volume '/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.573435] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-0: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.573754] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-2: Connected to vmstore-client-2,
attached to remote volume '/export/vmstore/brick01'.</span></p>
<p class="">
</p>
<p class=""><span class="">[2015-09-22 05:31:09.573783] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-2: Server and Client lk-version numbers
are not same, reopen:</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.577192] I
[fuse-bridge.c:5053:fuse_graph_setup] 0-fuse: switched to
graph 0</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.577302] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-1: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.577325] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-0: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.577339] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-2: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.578125] I
[fuse-bridge.c:3979:fuse_init] 0-glusterfs-fuse: FUSE
inited with protocol versions: glusterfs 7.22 kernel 7.22</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.578636] I
[MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 2</span></p>
<p class=""><span class="">[2015-09-22 05:31:10.073698] I
[fuse-bridge.c:4900:fuse_thread_proc] 0-fuse: unmounting
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore</span></p>
<p class=""><span class="">[2015-09-22 05:31:10.073977] W
[glusterfsd.c:1219:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7df5) [0x7f6b9ba88df5]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
[0x7f6b9d0f2785]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x69)
[0x7f6b9d0f2609] ) 0-: received signum (15), shutting down</span></p>
<p class=""><span class="">[2015-09-22 05:31:10.073993] I
[fuse-bridge.c:5595:fini] 0-fuse: Unmounting
'/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.</span></p>
<p class=""><span class="">[2015-09-22 05:31:20.184700] I
[MSGID: 100030] [glusterfsd.c:2301:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs
version 3.7.4 (args: /usr/sbin/glusterfs
--volfile-server=sjcvhost02 --volfile-server=sjcstorage01
--volfile-server=sjcstorage02 --volfile-id=/vmstore
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)</span></p>
<p class=""><span class="">[2015-09-22 05:31:20.194928] I
[MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 1</span></p>
<p class=""><span class="">[2015-09-22 05:31:20.200701] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-0:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">[2015-09-22 05:31:20.203110] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-1:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">[2015-09-22 05:31:20.205708] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-2:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">
</span></p>
<p class=""><span class="">Final graph:</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class="">Hope this helps. </span></p>
<p class=""><span class=""><br>
</span></p>
<p class="">thanks again</p>
<p class=""><br>
</p>
<p class="">Brett Stevens</p>
<p class=""><br>
</p>
</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Tue, Sep 22, 2015 at 10:14 PM,
Sahina Bose <span dir="ltr"><<a moz-do-not-send="true"
href="mailto:sabose@redhat.com" target="_blank">sabose(a)redhat.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF"><span class=""> <br>
<br>
<div>On 09/22/2015 02:17 PM, Brett Stevens wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Hi. First time on the lists. I've
searched for this but no luck so sorry if this has
been covered before.
<div><br>
</div>
<div>Im working with the latest 3.6 beta with the
following infrastructure. </div>
<div><br>
</div>
<div>1 management host (to be used for a number of
tasks so chose not to use self hosted, we are a
school and will need to keep an eye on hardware
costs)</div>
<div>2 compute nodes</div>
<div>2 gluster nodes</div>
<div><br>
</div>
<div>so far built one gluster volume using the
gluster cli to give me 2 nodes and one arbiter
node (management host)</div>
<div><br>
</div>
<div>so far, every time I create a volume, it shows
up strait away on the ovirt gui. however no matter
what I try, I cannot create or import it as a data
domain. </div>
<div><br>
</div>
<div>the current error in the ovirt gui is "Error
while executing action AddGlusterFsStorageDomain:
Error creating a storage domain's metadata"</div>
</div>
</blockquote>
<br>
</span> Please provide vdsm and gluster logs<span class=""><br>
<br>
<blockquote type="cite">
<div dir="ltr">
<div><br>
</div>
<div>logs, continuously rolling the following errors
around</div>
<div>
<p><span>Scheduler_Worker-53) [] START,
GlusterVolumesListVDSCommand(HostName =
sjcstorage02,
GlusterVolumesListVDSParameters:{runAsync='true',
hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}),
log id: 24198fbf</span></p>
<p><span>2015-09-22 03:57:29,903 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-53) [] Could
not associate brick
'sjcstorage01:/export/vmstore/brick01' of
volume '878a316d-2394-4aae-bdf8-e10eea38225e'
with correct network as no gluster network
found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
</div>
</div>
</blockquote>
<br>
</span> What is the hostname provided in ovirt engine for
<span>sjcstorage01 ? Does this host have multiple nics?<br>
<br>
Could you provide output of gluster volume info?<br>
Please note, that these errors are not related to error
in creating storage domain. However, these errors could
prevent you from monitoring the state of gluster volume
from oVirt<br>
<br>
</span>
<blockquote type="cite"><span class="">
<div dir="ltr">
<div>
<p><span>2015-09-22 03:57:29,905 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-53) [] Could
not associate brick
'sjcstorage02:/export/vmstore/brick01' of
volume '878a316d-2394-4aae-bdf8-e10eea38225e'
with correct network as no gluster network
found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p><span>2015-09-22 03:57:29,905 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-53) [] Could
not add brick
'sjcvhost02:/export/vmstore/brick01' to volume
'878a316d-2394-4aae-bdf8-e10eea38225e' -
server uuid
'29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not
found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p><span>2015-09-22 03:57:29,905 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-53) [] FINISH,
GlusterVolumesListVDSCommand, return:
{878a316d-2394-4aae-bdf8-e10eea38225e=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@41e93fb1},
log id: 24198fbf</span></p>
<p><span><br>
</span></p>
<p><span>I'm new to ovirt and gluster, so any help
would be great</span></p>
<p><span><br>
</span></p>
<p><span>thanks</span></p>
<p><span><br>
</span></p>
<p><span>Brett Stevens</span></p>
</div>
</div>
<br>
<fieldset></fieldset>
<br>
</span><span class="">
<pre>_______________________________________________
Users mailing list
<a moz-do-not-send="true" href="mailto:Users@ovirt.org" target="_blank">Users(a)ovirt.org</a>
<a moz-do-not-send="true" href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a>
</pre>
</span></blockquote>
<br>
</div>
</blockquote>
</div>
<br>
</div>
</blockquote>
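
For readers following along, the information requested above is usually gathered with something like the sketch below; the volume name and the mount-log filename are guesses based on this thread, so adjust them to your own setup:

# on one of the gluster nodes: volume definition and brick/self-heal status
gluster volume info vmstore
gluster volume status vmstore
# on the hypervisor: the vdsm log plus the FUSE mount log for the volume
tail -n 500 /var/log/vdsm/vdsm.log
tail -n 500 /var/log/glusterfs/rhev-data-center-mnt-glusterSD-sjcvhost02:_vmstore.log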
1
0
Hi list!
After a "war week" I finally got a systemd script to put the host into "maintenance" when a shutdown is started.
Now the problem is that the automatic migration of the VMs does NOT work...
In the web console I see the host go to "Preparing for maintenance" and the VM start migrating; then the host reaches "maintenance", and a couple of seconds later the VM is killed on the other host...
In the engine log I see
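
The engine log excerpt itself did not come through with this message. For reference, a minimal sketch of the kind of shutdown unit being described might look like the lines below; the unit name, engine URL, credentials and HOST_ID are placeholders, not details taken from this thread:

# a oneshot unit whose ExecStop asks the engine, over the REST API, to move
# this host to maintenance before shutdown continues
cat > /etc/systemd/system/ovirt-host-maintenance.service <<'EOF'
[Unit]
Description=Put this host into oVirt maintenance on shutdown
After=network.target vdsmd.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
ExecStop=/usr/bin/curl -k -u admin@internal:PASSWORD \
    -H 'Content-Type: application/xml' -d '<action/>' \
    https://ENGINE-FQDN/ovirt-engine/api/hosts/HOST_ID/deactivate

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl enable ovirt-host-maintenance.service

Note that the deactivate call only requests maintenance; the migrations then run asynchronously, so a unit like this would still need to poll the host status and block until the host actually reaches Maintenance, otherwise the shutdown can overtake the migration, which matches the symptom described above.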
1
0
Hi,
I have tried several times to import a VMware VM from a vcenter installation but it just seems to hang when I hit the "load" button. I have had a look at the vdsm.log but there is so much going on (presumably due to debug) it is hard to distinguish what is happening. Does anyone have any pointers on how to work out what is going on?
Best regards
Ian Fraser
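
For anyone in the same spot, one hedged way to cut the debug noise down while reproducing the hang is to watch only the import-related lines (the path below is the vdsm default):

# watch import/v2v related messages while clicking "load" again
tail -f /var/log/vdsm/vdsm.log | grep -iE 'v2v|vmware|import'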
2
5
Dear users,
I'm currently looking for a VDI solution. One of the requirements is to use OSS as much as possible, and I think oVirt could do the job.
However, we would like to build this VDI infrastructure on top of a Ceph storage cluster with RBD.
I know KVM/QEMU supports the rbd protocol, but I cannot find information about how to use it with oVirt. It seems I can only use NFS and GlusterFS.
Is there any way to use Ceph RBD devices attached to our virtual machines?
Thank you in advance.
Best regards,
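
As a point of reference for the "KVM/QEMU supports rbd" remark: QEMU's tooling can address an RBD image directly when built with librbd support. A small sketch (the pool and image names here are made up) would be:

# create and inspect a raw image stored in a Ceph pool via the rbd protocol
qemu-img create -f raw rbd:rbd/vm01-disk 20G
qemu-img info rbd:rbd/vm01-disk

Whether and how oVirt itself can consume such a device is exactly the open question in this thread.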
2
3
I have a simple setup. One machine is a node. I installed the 3.6 sixth beta
release node ISO on that machine. The IP address is set up. Then I try to
deploy the hosted engine over SSH. I enter a local HTTP URL of the CentOS 7
ISO and hit Deploy.
The TUI stops and I get:
login as: admin
admin(a)192.168.100.70's password:
Last login: Tue Sep 22 15:25:43 2015
An error appeared in the UI: AttributeError("'TransactionProgressDialog'
object has no attribute 'event'",)
Press ENTER to logout ...
or enter 's' to drop to shell
The ISO file does exist and is working.
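
When the TUI dies like this, the full traceback usually ends up in the setup log rather than on screen; a hedged place to look (the default path on the node) is:

# the newest hosted-engine setup log should contain the complete traceback
ls -t /var/log/ovirt-hosted-engine-setup/ | head -n 1
less /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-*.log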
1
0
Hi Chris,
Replies inline..
On 09/22/2015 09:31 AM, Sahina Bose wrote:
>
>
>
> -------- Forwarded Message --------
> Subject: Re: [ovirt-users] urgent issue
> Date: Wed, 9 Sep 2015 08:31:07 -0700
> From: Chris Liebman <chris.l(a)taboola.com>
> To: users <users(a)ovirt.org>
>
>
>
> Ok - I think I'm going to switch to local storage - I've had way too
> many unexplainable issues with glusterfs :-(. Is there any reason I
> can't add local storage to the existing shared-storage cluster? I see
> that the menu item is greyed out....
>
>
What version of gluster and ovirt are you using?
>
>
>
> On Tue, Sep 8, 2015 at 4:19 PM, Chris Liebman <chris.l(a)taboola.com
> <mailto:chris.l@taboola.com>> wrote:
>
> It's possible that this is specific to just one gluster volume...
> I've moved a few VM disks off of that volume and am able to start
> them fine. My recollection is that any VM started on the "bad"
> volume causes it to be disconnected and forces the ovirt node to
> be marked down until Maint->Activate.
>
> On Tue, Sep 8, 2015 at 3:52 PM, Chris Liebman
> <chris.l(a)taboola.com> wrote:
>
> In attempting to put an ovirt cluster in production I'm
> running into some odd errors with gluster it looks like. It's
> 12 hosts each with one brick in distributed-replicate.
> (actually 2 bricks but they are separate volumes)
>
These 12 nodes in dist-rep config, are they in replica 2 or replica 3?
The latter is what is recommended for VM use-cases. Could you give the
output of `gluster volume info` ?
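
A minimal sketch of the commands whose output is being asked for here; VOLNAME is a placeholder for the affected volume (LADC-TBX-V02 in this thread):

# volume layout, replica/arbiter counts and any quorum-related options
gluster volume info VOLNAME
gluster volume info VOLNAME | grep -i quorum
# brick and self-heal daemon status at the time of the disconnects
gluster volume status VOLNAME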
>
> [root@ovirt-node268 glusterfs]# rpm -qa | grep vdsm
>
> vdsm-jsonrpc-4.16.20-0.el6.noarch
>
> vdsm-gluster-4.16.20-0.el6.noarch
>
> vdsm-xmlrpc-4.16.20-0.el6.noarch
>
> vdsm-yajsonrpc-4.16.20-0.el6.noarch
>
> vdsm-4.16.20-0.el6.x86_64
>
> vdsm-python-zombiereaper-4.16.20-0.el6.noarch
>
> vdsm-python-4.16.20-0.el6.noarch
>
> vdsm-cli-4.16.20-0.el6.noarch
>
>
> Everything was fine last week, however, today various
> clients in the gluster cluster seem to get "client quorum not
> met" periodically - when they get this they take one of the
> bricks offline - this causes VMs to be attempted to move -
> sometimes 20 at a time. That takes a long time :-(. I've
> tried disabling automatic migration and the VMs get paused
> when this happens - resuming gets nothing at that point as the
> volume's mount on the server hosting the VM is not connected:
>
>
> from
> rhev-data-center-mnt-glusterSD-ovirt-node268.la.taboolasyndication.com:_LADC-TBX-V02.log:
>
> [2015-09-08 21:18:42.920771] W [MSGID: 108001]
> [afr-common.c:4043:afr_notify] 2-LADC-TBX-V02-replicate-2:
> Client-quorum is not met
>
When client-quorum is not met (due to network disconnects, or gluster
brick processes going down etc), gluster makes the volume read-only.
This is expected behavior and prevents split-brains. It's probably a bit
late, but do you have the gluster fuse mount logs to confirm this
indeed was the issue?
> [2015-09-08 21:18:42.931751] I
> [fuse-bridge.c:4900:fuse_thread_proc] 0-fuse: unmounting
> /rhev/data-center/mnt/glusterSD/ovirt-node268.la.taboolasyndication.com:_LADC-TBX-V02
>
> [2015-09-08 21:18:42.931836] W
> [glusterfsd.c:1219:cleanup_and_exit]
> (-->/lib64/libpthread.so.0(+0x7a51) [0x7f1bebc84a51]
> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xcd) [0x405e4d]
> -->/usr/sbin/glusterfs(cleanup_and_exit+0x
>
> 65) [0x4059b5] ) 0-: received signum (15), shutting down
>
> [2015-09-08 21:18:42.931858] I [fuse-bridge.c:5595:fini]
> 0-fuse: Unmounting
> '/rhev/data-center/mnt/glusterSD/ovirt-node268.la.taboolasyndication.com:_LADC-TBX-V02'.
>
The VM pause you saw could be because of the unmount. I understand that a
fix (https://gerrit.ovirt.org/#/c/40240/) went in for oVirt 3.6
(vdsm-4.17) to prevent vdsm from unmounting the gluster volume when vdsm
exits/restarts.
Is it possible to run a test setup on 3.6 and see if this is still
happening?
>
> And the mount is broken at that point:
>
> [root@ovirt-node267 ~]# df
>
> df: `/rhev/data-center/mnt/glusterSD/ovirt-node268.la.taboolasyndication.com:_LADC-TBX-V02':
> Transport endpoint is not connected
>
Yes because it received a SIGTERM above.
Thanks,
Ravi
>
> Filesystem                                          1K-blocks       Used   Available Use% Mounted on
> /dev/sda3                                            51475068    1968452    46885176   5% /
> tmpfs                                               132210244          0   132210244   0% /dev/shm
> /dev/sda2                                              487652      32409      429643   8% /boot
> /dev/sda1                                              204580        260      204320   1% /boot/efi
> /dev/sda5                                          1849960960  156714056  1599267616   9% /data1
> /dev/sdb1                                          1902274676   18714468  1786923588   2% /data2
> ovirt-node268.la.taboolasyndication.com:/LADC-TBX-V01
>                                                    9249804800  727008640  8052899712   9% /rhev/data-center/mnt/glusterSD/ovirt-node268.la.taboolasyndication.com:_LADC-TBX-V01
> ovirt-node251.la.taboolasyndication.com:/LADC-TBX-V03
>                                                    1849960960      73728  1755907968   1% /rhev/data-center/mnt/glusterSD/ovirt-node251.la.taboolasyndication.com:_LADC-TBX-V03
>
> The fix for that is to put the server in maintenance mode then
> activate it again. But all VM's need to be migrated or stopped
> for that to work.
>
>
> I'm not seeing any obvious network or disk errors......Â
>
> Are their configuration options I'm missing?
>
>
>
>
>
2
1
Hi. First time on the lists. I've searched for this but no luck so sorry if
this has been covered before.
I'm working with the latest 3.6 beta with the following infrastructure.
1 management host (to be used for a number of tasks so chose not to use
self hosted, we are a school and will need to keep an eye on hardware costs)
2 compute nodes
2 gluster nodes
so far built one gluster volume using the gluster cli to give me 2 nodes
and one arbiter node (management host)
so far, every time I create a volume, it shows up straight away on the ovirt
gui. however no matter what I try, I cannot create or import it as a data
domain.
the current error in the ovirt gui is "Error while executing action
AddGlusterFsStorageDomain: Error creating a storage domain's metadata"
logs, continuously rolling the following errors around
Scheduler_Worker-53) [] START, GlusterVolumesListVDSCommand(HostName =
sjcstorage02, GlusterVolumesListVDSParameters:{runAsync='true',
hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id: 24198fbf
2015-09-22 03:57:29,903 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-53) [] Could not associate brick
'sjcstorage01:/export/vmstore/brick01' of volume
'878a316d-2394-4aae-bdf8-e10eea38225e' with correct network as no gluster
network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
2015-09-22 03:57:29,905 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-53) [] Could not associate brick
'sjcstorage02:/export/vmstore/brick01' of volume
'878a316d-2394-4aae-bdf8-e10eea38225e' with correct network as no gluster
network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
2015-09-22 03:57:29,905 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-53) [] Could not add brick
'sjcvhost02:/export/vmstore/brick01' to volume
'878a316d-2394-4aae-bdf8-e10eea38225e' - server uuid
'29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
2015-09-22 03:57:29,905 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-53) [] FINISH, GlusterVolumesListVDSCommand,
return:
{878a316d-2394-4aae-bdf8-e10eea38225e=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@41e93fb1},
log id: 24198fbf
I'm new to ovirt and gluster, so any help would be great
thanks
Brett Stevens
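
For completeness, the gluster volume settings usually recommended before using a volume as an oVirt data domain look roughly like the sketch below; this is generic advice, not something confirmed as the cause of the metadata error above, and "vmstore" is simply the volume name used in this thread:

# apply the virt tuning profile and give vdsm (uid/gid 36) ownership of the volume
gluster volume set vmstore group virt
gluster volume set vmstore storage.owner-uid 36
gluster volume set vmstore storage.owner-gid 36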
3
2

21 Sep '15
On Mon, Sep 21, 2015 at 02:51:32PM -0400, Douglas Schilling Landgraf wrote:
> Hi Budur,
>
> On 09/21/2015 03:39 AM, Budur Nagaraju wrote:
> >Hi
> >
> >While converting vwware to ovirt getting below error ,can someone help me ?
Which version of virt-v2v?
The latest version can be found by reading the instructions here:
https://www.redhat.com/archives/libguestfs/2015-April/msg00038.html
https://www.redhat.com/archives/libguestfs/2015-April/msg00039.html
Please don't use the old (0.9) version.
> >I have given the passowd in the file " $HOME/.netrc" ,
> >
> >[root@cstnfs ~]# virt-v2v -ic esx://10.206.68.57?no_verify=1
> ><http://10.206.68.57?no_verify=1> -o rhev -os
> >10.204.206.10:/cst/secondary --network perfmgt vm
> >virt-v2v: Failed to connect to esx://10.206.68.57?no_verify=1
> ><http://10.206.68.57?no_verify=1>: libvirt error code: 45, message:
> >authentication failed: Password request failed
>
> Have you used the below format in the .netrc?
> machine esx.example.com login root password s3cr3t
>
> Additionally, have you set 0600 as permission to .netrc?
> chmod 600 ~/.netrc
The new version of virt-v2v does not use '.netrc' at all. Instead
there is a '--password-file' option. Best to read the manual page:
http://libguestfs.org/virt-v2v.1.html
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-top is 'top' for virtual machines. Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://people.redhat.com/~rjones/virt-top
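
Putting Rich's pointers together, a new-style invocation for the case quoted above might look like the following sketch; pw.txt is a hypothetical file holding the ESX password, the other arguments are simply the ones from the original post, and for a vCenter server the URI may need the vpx:// form described in virt-v2v(1):

# virt-v2v >= 1.28 style: credentials come from --password-file, not ~/.netrc
echo 'MyEsxPassword' > pw.txt && chmod 600 pw.txt
virt-v2v -ic 'esx://root@10.206.68.57?no_verify=1' --password-file pw.txt \
    -o rhev -os 10.204.206.10:/cst/secondary --network perfmgt vm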
1
0
So when I used oVirt 3.4.x, noVNC worked wonderfully. We're running
3.5.1.1-1.el6 now, and when I try to connect to a VM's console via noVNC I
get this error.
[image: Screen Shot 2015-09-21 at 10.09.23 AM.png]
I've downloaded the ca.crt file and installed it but I still get an HTTPS
error when connecting to the oVirt management console. Looking at the SSL
information Chrome says the following:
"The identity of this website has been verified by
ovirtm01.sharperlending.aws.96747. No Certificate Transparency information
was supplied by the server.
The certificate chain for this website contains at least one certificate
that was signed using a deprecated signature algorithm based on SHA-1."
Is this a known issue?
Thanks,
--
*Michael Kleinpaste*
Senior Systems Administrator
SharperLending, LLC.
www.SharperLending.com
Michael.Kleinpaste(a)SharperLending.com
(509) 324-1230 Fax: (509) 324-1234
2
2
Hi all,
We have a hosted-engine deployment with two hypervisors, with iSCSI for the VMs
and NFSv4 for the engine VM + ISO + Export domains.
Yesterday we updated from oVirt 3.5.3 to 3.5.4, along with OS updates
for the hypervisors and the engine VM.
After that we are unable to clone VMs; the task never finishes.
We see this in the vdsm log:
Thread-38::DEBUG::2015-09-21
11:50:42,374::libvirtconnection::143::root::(wrapper) Unknown libvirterror:
ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata
element is not present
Thread-66::DEBUG::2015-09-21
11:50:43,721::libvirtconnection::143::root::(wrapper) Unknown libvirterror:
ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata
element is not present
Thread-67::DEBUG::2015-09-21
11:50:44,189::libvirtconnection::143::root::(wrapper) Unknown libvirterror:
ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata
element is not present
vdsm-python-4.16.26-0.el7.centos.noarch
vdsm-4.16.26-0.el7.centos.x86_64
vdsm-xmlrpc-4.16.26-0.el7.centos.noarch
vdsm-yajsonrpc-4.16.26-0.el7.centos.noarch
vdsm-jsonrpc-4.16.26-0.el7.centos.noarch
vdsm-cli-4.16.26-0.el7.centos.noarch
vdsm-python-zombiereaper-4.16.26-0.el7.centos.noarch
libvirt-daemon-driver-qemu-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-driver-nodedev-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-kvm-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-config-nwfilter-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-1.2.8-16.el7_1.4.x86_64
libvirt-python-1.2.8-7.el7_1.1.x86_64
libvirt-daemon-driver-secret-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-driver-network-1.2.8-16.el7_1.4.x86_64
libvirt-lock-sanlock-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-driver-interface-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-driver-nwfilter-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-driver-storage-1.2.8-16.el7_1.4.x86_64
libvirt-client-1.2.8-16.el7_1.4.x86_64
Thanks in advance
2
1
Hi,
Can you please provide information on how to import a VMware OVA into
oVirt?
Thanks,
Nagaraju
2
2
Hi,
While converting VMware to oVirt I am getting the error below, can someone help me?
I have given the password in the file "$HOME/.netrc":
[root@cstnfs ~]# virt-v2v -ic esx://10.206.68.57?no_verify=1 -o rhev -os
10.204.206.10:/cst/secondary --network perfmgt vm
virt-v2v: Failed to connect to esx://10.206.68.57?no_verify=1: libvirt
error code: 45, message: authentication failed: Password request failed
Thanks,
Nagaraju
1
0
The oVirt team is pleased to announce that today oVirt moved to its own
classification within our Bugzilla system as previously anticipated [1].
No longer limited to being a set of sub-projects, each building block
(sub-project) of oVirt will now be a Bugzilla product.
This will allow tracking of package versions and target releases based on
their own versioning schema.
Each maintainer, for example, will have administrative rights on his or her
Bugzilla sub-project and will be able to change flags,
versions, targets, and components.
As part of the improvements of the Bugzilla tracking system, a flag system
has been added to the oVirt product in order to ease its management [2].
The changes will go into effect in stages; please review the wiki for more
details.
We invite you to review the new tracking system and get involved with oVirt
QA [3] to make oVirt better than ever!
[1] http://community.redhat.com/blog/2015/06/moving-focus-to-the-upstream/
[2] http://www.ovirt.org/Bugzilla_rework
[3] http://www.ovirt.org/OVirt_Quality_Assurance
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
2
1
Hi,
I am currently writing a little backup tool in Python which uses the following
workflow:
- create a snapshot -> works
- clone snapshot into VM -> help needed
- delete the snapshot -> works
- export VM to NFS share -> works
- delete cloned VM -> TODO
Is it possible to clone a snapshot into a VM, like from the web interface?
The above workflow is a little bit resource-expensive, but once it is finished
it will make online full backups of VMs. A rough sketch of what I have in mind
is below.
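Here is an untested sketch using the ovirtsdk 3.x Python bindings (the engine
URL, credentials, VM, snapshot and cluster names are all placeholders):

from ovirtsdk.api import API
from ovirtsdk.xml import params

api = API(url='https://engine.example.com/api',
          username='admin@internal', password='secret', insecure=True)

vm = api.vms.get(name='myvm')
# pick the snapshot to clone from, here by its description
snap = [s for s in vm.snapshots.list()
        if s.get_description() == 'backup-snap'][0]

# add a new VM whose disks are taken from that snapshot
api.vms.add(params.VM(
    name='myvm-clone',
    cluster=api.clusters.get(name='Default'),
    template=api.templates.get(name='Blank'),
    snapshots=params.Snapshots(snapshot=[params.Snapshot(id=snap.get_id())])))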
cheers
gregor
3
5
Hi,
somehow I have lost track of whether live storage migration is possible.
We are using oVirt 3.5.4 + FC20 nodes (virt-preview, qemu 2.1.3).
From the WebUI I have the following possibilities:
1) Disk without snapshot: VMs tab -> Disks -> Move: the button is active,
but it does not allow a migration. There is no selectable storage domain,
although we have 2 NFS systems. It gives warning hints about
"you are doing live migration, bla bla, ..."
2) Disk with snapshot: VMs tab -> Disk -> Move: the button is greyed out.
3) BUT! Disks tab -> Move: works! No hints about "live migration".
I do not dare to click go ...
While 1 and 2 might be consistent with each other, they do not match 3.
Maybe someone can give a hint about what should work, what should not,
and where we might have an error.
Thanks.
Markus
2
2
Hi,
I need help installing the guest tools. Can someone help me with getting
the ISO image?
Thanks,
Nagaraju
2
1