Recommended for production environment
by lucas castro
What OS is recommended for use in production?
I do all my testing on Fedora; CentOS is more stable, while Fedora is more up to date.
If I choose CentOS or Fedora, what changes? Is there any important feature to consider?
--
Contacts:
Mobile: (99) 9143-5954 - Vivo
skype: lucasd3castro
msn: lucascastroborges(a)hotmail.com
logs and ovirt
by Nathanaël Blanchet
Hi,
Several questions today about logs.
* What is ovirt-log-collector for?
* Sometimes I can't find the output of the Events tab in
/var/log/ovirt-engine. Where can I find the whole content of the
Events tab?
* Is there a way to collect both engine and host vdsm logs on a
syslog server?
Thanks for helping.
--
Nathanaël Blanchet
Supervision réseau
Pôle exploitation et maintenance
Département des systèmes d'information
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
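For the questions above: as far as I know, the Events tab content is stored in the engine database (the audit_log table) rather than under /var/log/ovirt-engine, which would explain why it is not found there. Below is a minimal sketch of the two usual collection approaches, assuming a stock engine install; the syslog target syslog.example.com is a placeholder.
# Run on the engine host: ovirt-log-collector gathers engine, database and host (vdsm)
# logs into a single archive; it typically prompts for engine/host credentials.
ovirt-log-collector
# Hedged sketch: forward engine.log to a remote syslog server with rsyslog's imfile module.
# The same pattern applies to /var/log/vdsm/vdsm.log on each hypervisor host.
cat > /etc/rsyslog.d/ovirt-engine-forward.conf <<'EOF'
$ModLoad imfile
$InputFileName /var/log/ovirt-engine/engine.log
$InputFileTag ovirt-engine:
$InputFileStateFile state-ovirt-engine
$InputFileSeverity info
$InputRunFileMonitor
# Note: '*.*' forwards everything this rsyslog instance sees; filter on the tag if needed.
*.* @@syslog.example.com:514
EOF
service rsyslog restart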
Kvm to ovirt
by Demeter Tibor
Hi,
I have two CentOS KVM servers and there are a lot of VMs on them.
I would like to switch to oVirt.
I know I need to create an import domain where the VMs are stored before importing them into an oVirt domain. Do I need to create an import domain in any case?
My problem is the long conversion process and the necessary hardware. I don't have a long time window for the conversion, and I don't have a lot of free space for an import domain server.
So, can I convert VMs _directly_ to oVirt?
Is there any good solution?
Thanks in advance.
Tibor
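Not a confirmed answer to the "directly" question, but for reference, a hedged sketch of the usual virt-v2v path on the KVM host: it converts a libvirt-defined guest and writes the result straight into an NFS export domain that oVirt then imports from, so no staging copy is needed beyond that export domain (which still needs enough free space for the converted disks). Host, path and VM names below are placeholders.
# Hedged sketch: convert a shut-down libvirt/KVM guest into an oVirt export storage domain (NFS).
# nfs.example.com:/export is the export domain path; "myvm" is the libvirt domain name.
virt-v2v -ic qemu:///system -o rhev -os nfs.example.com:/export myvm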
Mirrored Storage domain
by Koen Vanoppen
Hi All,
Yes, I have two problems :-).
We wanted to make our production environment more highly available by adding a mirrored
disk in oVirt.
This is the result in engine.log when I add it (it fails, by the way, with
"Error while executing action New SAN Storage Domain: Network error during
communication with the Host.").
This is from engine.log:
2014-06-24 10:23:26,580 INFO
[org.ovirt.engine.core.bll.scheduling.policyunits.EvenGuestDistributionBalancePolicyUnit]
(DefaultQuartzScheduler_Worker-41) There is no host with more than 10
running guests, no balancing is needed
2014-06-24 10:23:26,580 INFO
[org.ovirt.engine.core.bll.scheduling.PolicyUnitImpl]
(DefaultQuartzScheduler_Worker-41) There is no over-utilized host in
cluster SandyBridgeCluster
2014-06-24 10:23:26,662 WARN
[org.ovirt.engine.core.bll.scheduling.policyunits.EvenGuestDistributionBalancePolicyUnit]
(DefaultQuartzScheduler_Worker-41) There is no host with less than 14
running guests
2014-06-24 10:23:26,662 WARN
[org.ovirt.engine.core.bll.scheduling.PolicyUnitImpl]
(DefaultQuartzScheduler_Worker-41) All hosts are over-utilized, cant
balance the cluster testdev
2014-06-24 10:23:26,712 INFO
[org.ovirt.engine.core.bll.scheduling.PolicyUnitImpl]
(DefaultQuartzScheduler_Worker-41) There is no over-utilized host in
cluster DmzCluster
2014-06-24 10:23:47,758 INFO
[org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand]
(ajp--127.0.0.1-8702-13) [1c2a6a51] Running command:
AddSANStorageDomainCommand internal: false. Entities affected : ID:
aaa00000-0000-0000-0000-123456789aaa Type: System
2014-06-24 10:23:47,778 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand]
(ajp--127.0.0.1-8702-13) [1c2a6a51] START, CreateVGVDSCommand(HostName =
saturnus1, HostId = 1180a1f6-635e-47f6-bba1-871d8c432de0,
storageDomainId=ac231cb2-d2b6-4db2-8173-f26e82970c52,
deviceList=[360050768018087c9180000000000000b], force=true), log id:
13edf224
2014-06-24 10:23:48,269 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand]
(ajp--127.0.0.1-8702-13) [1c2a6a51] FINISH, CreateVGVDSCommand, return:
KoxrIK-qHuC-Kba8-jxFx-Ngb9-Pnzo-mAp04N, log id: 13edf224
2014-06-24 10:23:48,319 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(ajp--127.0.0.1-8702-13) [1c2a6a51] START,
CreateStorageDomainVDSCommand(HostName = saturnus1, HostId =
1180a1f6-635e-47f6-bba1-871d8c432de0,
storageDomain=org.ovirt.engine.core.common.businessentities.StorageDomainStatic@e49646bc,
args=KoxrIK-qHuC-Kba8-jxFx-Ngb9-Pnzo-mAp04N), log id: 75ec9ee1
2014-06-24 10:24:00,501 WARN
[org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil]
(org.ovirt.thread.pool-6-thread-43) Executing a command:
java.util.concurrent.FutureTask , but note that there are 1 tasks in the
queue.
2014-06-24 10:24:16,397 WARN
[org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil]
(org.ovirt.thread.pool-6-thread-9) Executing a command:
java.util.concurrent.FutureTask , but note that there are 1 tasks in the
queue.
2014-06-24 10:24:18,323 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(ajp--127.0.0.1-8702-13) [1c2a6a51] Command
CreateStorageDomainVDSCommand(HostName = saturnus1, HostId =
1180a1f6-635e-47f6-bba1-871d8c432de0,
storageDomain=org.ovirt.engine.core.common.businessentities.StorageDomainStatic@e49646bc,
args=KoxrIK-qHuC-Kba8-jxFx-Ngb9-Pnzo-mAp04N) execution failed. Exception:
VDSNetworkException: java.util.concurrent.TimeoutException
2014-06-24 10:24:18,324 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(ajp--127.0.0.1-8702-13) [1c2a6a51] FINISH, CreateStorageDomainVDSCommand,
log id: 75ec9ee1
2014-06-24 10:24:18,326 ERROR
[org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand]
(ajp--127.0.0.1-8702-13) [1c2a6a51] Command
org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand throw Vdc Bll
exception. With error message VdcBLLException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException:
java.util.concurrent.TimeoutException (Failed with error VDS_NETWORK_ERROR
and code 5022)
2014-06-24 10:24:18,332 INFO
[org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand]
(ajp--127.0.0.1-8702-13) [1c2a6a51] Command
[id=957b5d70-cd6f-4a8c-a30b-b07a1e929006]: Compensating NEW_ENTITY_ID of
org.ovirt.engine.core.common.businessentities.StorageDomainDynamic;
snapshot: ac231cb2-d2b6-4db2-8173-f26e82970c52.
2014-06-24 10:24:18,336 INFO
[org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand]
(ajp--127.0.0.1-8702-13) [1c2a6a51] Command
[id=957b5d70-cd6f-4a8c-a30b-b07a1e929006]: Compensating NEW_ENTITY_ID of
org.ovirt.engine.core.common.businessentities.StorageDomainStatic;
snapshot: ac231cb2-d2b6-4db2-8173-f26e82970c52.
2014-06-24 10:24:18,353 ERROR
[org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand]
(ajp--127.0.0.1-8702-13) [1c2a6a51] Transaction rolled-back for command:
org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand.
2014-06-24 10:24:18,363 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-13) [1c2a6a51] Correlation ID: 1c2a6a51, Job ID:
31721b2d-b77f-40cc-b792-169c304b58d7, Call Stack: null, Custom Event ID:
-1, Message: Failed to add Storage Domain StoragePooloVirt3-SkyLab-Mirror.
Kind regards,
Koen
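Not an answer, but a short triage sketch that may help narrow this down, assuming SSH access to saturnus1: the engine side only reports a VDSNetworkException/TimeoutException, so the interesting part is usually what vdsm and multipath on the host were doing with that LUN while CreateStorageDomainVDSCommand was running.
# Hedged triage sketch, run on the host (saturnus1) around the time of the failure.
grep -i createStorageDomain /var/log/vdsm/vdsm.log | tail -n 50
multipath -ll 360050768018087c9180000000000000b
# Slow or failed paths tend to show up in the system log as well:
tail -n 200 /var/log/messages | grep -iE 'multipath|scsi'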
Convert vmware vm
by Massimo Mad
Hello,
I downloaded a .vmdk image from a repository and would like to import it into
my oVirt infrastructure.
Is that possible?
If so, how?
Thanks,
Massimo
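One common path, sketched under the assumption that the .vmdk holds a single flat disk: convert it with qemu-img to a format oVirt handles natively, then bring it in through an export domain or attach it to a VM disk as appropriate for your setup. File names below are placeholders.
# Hedged sketch: inspect and convert a VMware .vmdk disk image with qemu-img.
qemu-img info downloaded-image.vmdk
# Convert to raw (or use -O qcow2 for a sparse image) before importing into oVirt.
qemu-img convert -f vmdk -O raw downloaded-image.vmdk downloaded-image.img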
Host cannot access Storage Domain.
by Nathan Llaneza
Hello All,
I have recently had the pleasure of testing oVirt 3.4.2 in our network. I
have two hypervisors running an HA hosted engine and four Gluster servers.
The four Gluster servers run two separate replicated storage domains. After
a fresh install everything seemed to work, but when I rebooted Gluster
server #4 it would not move to the UP state. After some investigation I
realized that the glusterd daemon was not set to start on boot. My
resolution was to run "chkconfig glusterd on && service glusterd start".
Now I have encountered a new error. The server keeps moving from an UP status
to a DOWN status and back again. This seems to be the related error message in
/var/log/messages:
"Jun 24 07:59:59 CTI-SAN06 vdsm vds ERROR vdsm exception
occured#012Traceback (most recent call last):#012 File
"/usr/share/vdsm/BindingXMLRPC.py", line 1070, in wrapper#012 res =
f(*args, **kwargs)#012 File "/usr/share/vdsm/gluster/api.py", line 54, in
wrapper#012 rv = func(*args, **kwargs)#012 File
"/usr/share/vdsm/gluster/api.py", line 240, in hostsList#012 return
{'hosts': self.svdsmProxy.glusterPeerStatus()}#012 File
"/usr/share/vdsm/supervdsm.py", line 50, in __call__#012 return
callMethod()#012 File "/usr/share/vdsm/supervdsm.py", line 48, in
<lambda>#012 **kwargs)#012 File "<string>", line 2, in
glusterPeerStatus#012 File
"/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in
_callmethod#012 raise convert_to_error(kind,
result)#012GlusterCmdExecFailedException: Command execution
failed#012error: Connection failed. Please check if gluster daemon is
operational.#012return code: 1".
The last line seems to be a dead giveaway: "Please check if gluster daemon
is operational". When I run "service glusterd status" it prints "glusterd
(pid xxxxx) is running...".
So I did some Internet searches and found this bug,
http://gerrit.ovirt.org/#/c/23982/, but it says that it has been fixed. So
I thought maybe my packages were out of date and updated them using
"yum -y update". All my Gluster servers are showing that
they are running Gluster 3.5.0. I am still having no success getting my
fourth Gluster server to remain in the UP state.
Any advice/help would be greatly appreciated!
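A short checklist sketch that may help pin down whether the problem is glusterd itself or vdsm's view of it, assuming a CentOS 6-style init on the servers: verify glusterd is enabled and that all peers agree on the cluster state, then look at what supervdsm logged when the peer status call failed.
# Hedged sketch: run on the flapping Gluster server (CTI-SAN06).
chkconfig --list glusterd     # confirm it is enabled for the default runlevels
service glusterd status
gluster peer status           # every peer should report "Peer in Cluster (Connected)"
gluster volume status         # bricks on this host should be online
# vdsm talks to gluster through supervdsm; its log shows the exact command that failed.
tail -n 100 /var/log/vdsm/supervdsm.log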
Spice Error
by Maurice James
Hello all,
This problem is also happening with oVirt
https://access.redhat.com/site/solutions/46950
Spice error code 1032
It happens when I try to connect to a VM with the console set to auto (USB enabled).
When I connect to another VM with USB disabled, everything works.
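Since the failure only appears with USB redirection enabled, a minimal client-side check, assuming a Linux client using remote-viewer: confirm the USB redirection pieces are installed, and retry from the command line to capture more output than the bare error code 1032.
# Hedged sketch: checks on the client machine, not on the hypervisor.
rpm -q virt-viewer spice-gtk usbredir
# Launching the downloaded console file by hand usually prints a more useful error:
remote-viewer --debug console.vv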
Fwd: ISO_DOMAIN issue
by Koen Vanoppen
---------- Forwarded message ----------
From: Koen Vanoppen <vanoppen.koen(a)gmail.com>
Date: 2014-06-24 11:27 GMT+02:00
Subject: Re: [ovirt-users] ISO_DOMAIN issue
To: Sven Kieske <S.Kieske(a)mittwald.de>
Ok, thanks. I manually went through all the VMs --> Edit --> unchecked "Attach
CD" --> OK.
But it keeps giving the error. Do I need to restart something, or is
there a way to see why the error keeps coming?
This is the error:
Thread-2364528::DEBUG::2014-06-24
11:28:47,044::task::579::TaskManager.Task::(_updateState)
Task=`1334f4cc-8e8a-4fd3-9596-e1e500c7942a`::moving from state preparing ->
state finished
Thread-2364528::DEBUG::2014-06-24
11:28:47,046::resourceManager::939::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.476ba0d6-37f1-4b36-b20c-936d31592794': < ResourceRef
'Storage.476ba0d6-37f1-4b36-b20c-936d31592794', isValid: 'True' obj:
'None'>}
Thread-2364528::DEBUG::2014-06-24
11:28:47,047::resourceManager::976::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-2364528::DEBUG::2014-06-24
11:28:47,050::resourceManager::615::ResourceManager::(releaseResource)
Trying to release resource 'Storage.476ba0d6-37f1-4b36-b20c-936d31592794'
Thread-2364528::DEBUG::2014-06-24
11:28:47,053::resourceManager::634::ResourceManager::(releaseResource)
Released resource 'Storage.476ba0d6-37f1-4b36-b20c-936d31592794' (0 active
users)
Thread-2364528::DEBUG::2014-06-24
11:28:47,054::resourceManager::640::ResourceManager::(releaseResource)
Resource 'Storage.476ba0d6-37f1-4b36-b20c-936d31592794' is free, finding
out if anyone is waiting for it.
Thread-2364528::DEBUG::2014-06-24
11:28:47,058::resourceManager::648::ResourceManager::(releaseResource) No
one is waiting for resource 'Storage.476ba0d6-37f1-4b36-b20c-936d31592794',
Clearing records.
Thread-2364528::DEBUG::2014-06-24
11:28:47,060::task::974::TaskManager.Task::(_decref)
Task=`1334f4cc-8e8a-4fd3-9596-e1e500c7942a`::ref 0 aborting False
Thread-2364493::WARNING::2014-06-24
11:28:47,795::fileSD::636::scanDomains::(collectMetaFiles) Metadata
collection for domain path
/rhev/data-center/mnt/vega.brusselsairport.aero:_var_lib_exports_iso
timedout
Traceback (most recent call last):
File "/usr/share/vdsm/storage/fileSD.py", line 625, in collectMetaFiles
sd.DOMAIN_META_DATA))
File "/usr/share/vdsm/storage/remoteFileHandler.py", line 297, in
callCrabRPCFunction
*args, **kwargs)
File "/usr/share/vdsm/storage/remoteFileHandler.py", line 184, in
callCrabRPCFunction
rawLength = self._recvAll(LENGTH_STRUCT_LENGTH, timeout)
File "/usr/share/vdsm/storage/remoteFileHandler.py", line 150, in _recvAll
raise Timeout()
Timeout
Thread-2364493::DEBUG::2014-06-24
11:28:47,802::remoteFileHandler::260::RepoFileHelper.PoolHandler::(stop)
Pool handler existed, OUT: '' ERR: ''
Thread-23::ERROR::2014-06-24
11:28:47,809::sdc::143::Storage.StorageDomainCache::(_findDomain) domain
50cf24a4-d1ef-4105-a9a5-b81d91339175 not found
Traceback (most recent call last):
File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
dom = findMethod(sdUUID)
File "/usr/share/vdsm/storage/nfsSD.py", line 132, in findDomain
return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID))
File "/usr/share/vdsm/storage/nfsSD.py", line 122, in findDomainPath
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist:
(u'50cf24a4-d1ef-4105-a9a5-b81d91339175',)
Thread-23::ERROR::2014-06-24
11:28:47,814::domainMonitor::225::Storage.DomainMonitorThread::(_monitorDomain)
Error while collecting domain 50cf24a4-d1ef-4105-a9a5-b81d91339175
monitoring information
Traceback (most recent call last):
File "/usr/share/vdsm/storage/domainMonitor.py", line 201, in
_monitorDomain
self.domain.selftest()
File "/usr/share/vdsm/storage/sdc.py", line 49, in __getattr__
return getattr(self.getRealDomain(), attrName)
File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
return self._cache._realProduce(self._sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 122, in _realProduce
domain = self._findDomain(sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
dom = findMethod(sdUUID)
File "/usr/share/vdsm/storage/nfsSD.py", line 132, in findDomain
return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID))
File "/usr/share/vdsm/storage/nfsSD.py", line 122, in findDomainPath
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist:
(u'50cf24a4-d1ef-4105-a9a5-b81d91339175',)
Dummy-124::DEBUG::2014-06-24
11:28:47,851::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail)
SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB)
copied, 0.276721 s, 3.7 MB/s\n'; <rc> = 0
Thread-21::DEBUG::2014-06-24
11:28:48,183::blockSD::595::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
iflag=direct if=/dev/6cf8c48e-fbed-4b68-b376-57eab3039878/metadata bs=4096
count=1' (cwd None)
Thread-21::DEBUG::2014-06-24
11:28:48,223::blockSD::595::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
<err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied,
0.000196027 s, 20.9 MB/s\n'; <rc> = 0
Dummy-124::DEBUG::2014-06-24
11:28:49,873::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail)
'dd
if=/rhev/data-center/476ba0d6-37f1-4b36-b20c-936d31592794/mastersd/dom_md/inbox
iflag=direct,fullblock count=1 bs=1024000' (cwd None)
Dummy-124::DEBUG::2014-06-24
11:28:50,023::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail)
SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB)
copied, 0.0498583 s, 20.5 MB/s\n'; <rc> = 0
Thread-22::DEBUG::2014-06-24
11:28:50,288::blockSD::595::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
iflag=direct if=/dev/dc662b6f-00d4-4b9a-a320-0f5ecf5da45e/metadata bs=4096
count=1' (cwd None)
Thread-22::DEBUG::2014-06-24
11:28:50,357::blockSD::595::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
<err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied,
0.000228333 s, 17.9 MB/s\n'; <rc> = 0
2014-06-24 10:46 GMT+02:00 Sven Kieske <S.Kieske(a)mittwald.de>:
>
> Am 24.06.2014 10:27, schrieb Koen Vanoppen:
> > Hi all,
> >
> > We have a small problem with a recently removed iso_domain. The domain
> ahs
> > been removed in the ovirt but in the vdsm he keeps complaining about this
>
> Did you make sure there are no vms running with attached isos from this
> domain?
>
> --
> Mit freundlichen Grüßen / Regards
>
> Sven Kieske
>
> Systemadministrator
> Mittwald CM Service GmbH & Co. KG
> Königsberger Straße 6
> 32339 Espelkamp
> T: +49-5772-293-100
> F: +49-5772-293-333
> https://www.mittwald.de
> Geschäftsführer: Robert Meyer
> St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
> Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
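A small sketch of what often helps with a domain that was removed in the engine but that vdsm keeps monitoring, assuming nothing still uses the old ISO path: check whether the stale NFS export is still mounted under /rhev/data-center/mnt on the host, unmount it if so, and restart vdsmd so its domain monitor forgets it. The mount point below is a placeholder; use the exact path from your own log.
# Hedged sketch, run on the host that keeps logging StorageDomainDoesNotExist.
mount | grep /rhev/data-center/mnt
# If the removed ISO domain's export is still mounted, unmount it (placeholder path):
umount '/rhev/data-center/mnt/old-iso-server:_var_lib_exports_iso'
# Restarting vdsm clears its domain-monitor threads; the host briefly goes non-responsive in the engine.
service vdsmd restart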
can't attach storage domain to data center
by Tiemen Ruiten
Hello,
I've been struggling to set up an oVirt cluster and am now bumping into
this problem:
When I try to create a new (Gluster) storage domain, it fails to attach
to the data center. The error on the node, from vdsm.log:
Thread-13::DEBUG::2014-06-21
16:17:14,157::BindingXMLRPC::251::vds::(wrapper) client [192.168.10.119]
flowID [6e44c0a3]
Thread-13::DEBUG::2014-06-21
16:17:14,159::task::595::TaskManager.Task::(_updateState)
Task=`97b78287-45d2-4d5a-8336-460987df3840`::moving from state init ->
state preparing
Thread-13::INFO::2014-06-21
16:17:14,160::logUtils::44::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=7,
spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '',
'connection': '192.168.10.120:/vmimage', 'iqn': '', 'user': '', 'tpgt':
'1', 'vfs_type': 'glusterfs', 'password': '******', 'id':
'901b15ec-6b05-43c1-8a50-06b34c8ffdbd'}], options=None)
Thread-13::DEBUG::2014-06-21
16:17:14,172::hsm::2340::Storage.HSM::(__prefetchDomains)
glusterDomPath: glusterSD/*
Thread-13::DEBUG::2014-06-21
16:17:14,185::hsm::2352::Storage.HSM::(__prefetchDomains) Found SD
uuids: ('dc661957-c0c1-44ba-a5b9-e6558904207b',)
Thread-13::DEBUG::2014-06-21
16:17:14,185::hsm::2408::Storage.HSM::(connectStorageServer) knownSDs:
{dc661957-c0c1-44ba-a5b9-e6558904207b: storage.glusterSD.findDomain}
Thread-13::INFO::2014-06-21
16:17:14,186::logUtils::47::dispatcher::(wrapper) Run and protect:
connectStorageServer, Return response: {'statuslist': [{'status': 0,
'id': '901b15ec-6b05-43c1-8a50-06b34c8ffdbd'}]}
Thread-13::DEBUG::2014-06-21
16:17:14,186::task::1185::TaskManager.Task::(prepare)
Task=`97b78287-45d2-4d5a-8336-460987df3840`::finished: {'statuslist':
[{'status': 0, 'id': '901b15ec-6b05-43c1-8a50-06b34c8ffdbd'}]}
Thread-13::DEBUG::2014-06-21
16:17:14,187::task::595::TaskManager.Task::(_updateState)
Task=`97b78287-45d2-4d5a-8336-460987df3840`::moving from state preparing
-> state finished
Thread-13::DEBUG::2014-06-21
16:17:14,187::resourceManager::940::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-13::DEBUG::2014-06-21
16:17:14,187::resourceManager::977::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-13::DEBUG::2014-06-21
16:17:14,188::task::990::TaskManager.Task::(_decref)
Task=`97b78287-45d2-4d5a-8336-460987df3840`::ref 0 aborting False
Thread-13::DEBUG::2014-06-21
16:17:14,195::BindingXMLRPC::251::vds::(wrapper) client [192.168.10.119]
flowID [6e44c0a3]
Thread-13::DEBUG::2014-06-21
16:17:14,195::task::595::TaskManager.Task::(_updateState)
Task=`d815e5e5-0202-4137-94be-21dc5e2b61c9`::moving from state init ->
state preparing
Thread-13::INFO::2014-06-21
16:17:14,196::logUtils::44::dispatcher::(wrapper) Run and protect:
createStoragePool(poolType=None,
spUUID='806d2356-12cf-437c-8917-dd13ee823e36', poolName='testing',
masterDom='dc661957-c0c1-44ba-a5b9-e6558904207b',
domList=['dc661957-c0c1-44ba-a5b9-e6558904207b'], masterVersion=2,
lockPolicy=None, lockRenewalIntervalSec=5, leaseTimeSec=60,
ioOpTimeoutSec=10, leaseRetries=3, options=None)
Thread-13::DEBUG::2014-06-21
16:17:14,196::misc::756::SamplingMethod::(__call__) Trying to enter
sampling method (storage.sdc.refreshStorage)
Thread-13::DEBUG::2014-06-21
16:17:14,197::misc::758::SamplingMethod::(__call__) Got in to sampling
method
Thread-13::DEBUG::2014-06-21
16:17:14,197::misc::756::SamplingMethod::(__call__) Trying to enter
sampling method (storage.iscsi.rescan)
Thread-13::DEBUG::2014-06-21
16:17:14,198::misc::758::SamplingMethod::(__call__) Got in to sampling
method
Thread-13::DEBUG::2014-06-21
16:17:14,198::iscsi::407::Storage.ISCSI::(rescan) Performing SCSI scan,
this will take up to 30 seconds
Thread-13::DEBUG::2014-06-21
16:17:14,199::iscsiadm::92::Storage.Misc.excCmd::(_runCmd)
'/usr/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None)
Thread-13::DEBUG::2014-06-21
16:17:14,228::misc::766::SamplingMethod::(__call__) Returning last result
Thread-13::DEBUG::2014-06-21
16:17:14,229::multipath::110::Storage.Misc.excCmd::(rescan)
'/usr/bin/sudo -n /sbin/multipath -r' (cwd None)
Thread-13::DEBUG::2014-06-21
16:17:14,294::multipath::110::Storage.Misc.excCmd::(rescan) SUCCESS:
<err> = ''; <rc> = 0
Thread-13::DEBUG::2014-06-21
16:17:14,295::lvm::497::OperationMutex::(_invalidateAllPvs) Operation
'lvm invalidate operation' got the operation mutex
Thread-13::DEBUG::2014-06-21
16:17:14,295::lvm::499::OperationMutex::(_invalidateAllPvs) Operation
'lvm invalidate operation' released the operation mutex
Thread-13::DEBUG::2014-06-21
16:17:14,296::lvm::508::OperationMutex::(_invalidateAllVgs) Operation
'lvm invalidate operation' got the operation mutex
Thread-13::DEBUG::2014-06-21
16:17:14,296::lvm::510::OperationMutex::(_invalidateAllVgs) Operation
'lvm invalidate operation' released the operation mutex
Thread-13::DEBUG::2014-06-21
16:17:14,297::lvm::528::OperationMutex::(_invalidateAllLvs) Operation
'lvm invalidate operation' got the operation mutex
Thread-13::DEBUG::2014-06-21
16:17:14,297::lvm::530::OperationMutex::(_invalidateAllLvs) Operation
'lvm invalidate operation' released the operation mutex
Thread-13::DEBUG::2014-06-21
16:17:14,298::misc::766::SamplingMethod::(__call__) Returning last result
Thread-13::DEBUG::2014-06-21
16:17:14,318::fileSD::150::Storage.StorageDomain::(__init__) Reading
domain in path
/rhev/data-center/mnt/glusterSD/192.168.10.120:_vmimage/dc661957-c0c1-44ba-a5b9-e6558904207b
Thread-13::DEBUG::2014-06-21
16:17:14,322::persistentDict::192::Storage.PersistentDict::(__init__)
Created a persistent dict with FileMetadataRW backend
Thread-13::DEBUG::2014-06-21
16:17:14,328::persistentDict::234::Storage.PersistentDict::(refresh)
read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=vmimage',
'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=',
'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=',
'REMOTE_PATH=192.168.10.120:/vmimage', 'ROLE=Regular',
'SDUUID=dc661957-c0c1-44ba-a5b9-e6558904207b', 'TYPE=GLUSTERFS',
'VERSION=3', '_SHA_CKSUM=9fdc035c398d2cd8b5c31bf5eea2882c8782ed57']
Thread-13::DEBUG::2014-06-21
16:17:14,334::fileSD::609::Storage.StorageDomain::(imageGarbageCollector) Removing
remnants of deleted images []
Thread-13::INFO::2014-06-21
16:17:14,335::sd::383::Storage.StorageDomain::(_registerResourceNamespaces)
Resource namespace dc661957-c0c1-44ba-a5b9-e6558904207b_imageNS already
registered
Thread-13::INFO::2014-06-21
16:17:14,335::sd::391::Storage.StorageDomain::(_registerResourceNamespaces)
Resource namespace dc661957-c0c1-44ba-a5b9-e6558904207b_volumeNS already
registered
Thread-13::INFO::2014-06-21
16:17:14,336::fileSD::350::Storage.StorageDomain::(validate)
sdUUID=dc661957-c0c1-44ba-a5b9-e6558904207b
Thread-13::DEBUG::2014-06-21
16:17:14,340::persistentDict::234::Storage.PersistentDict::(refresh)
read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=vmimage',
'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=',
'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=',
'REMOTE_PATH=192.168.10.120:/vmimage', 'ROLE=Regular',
'SDUUID=dc661957-c0c1-44ba-a5b9-e6558904207b', 'TYPE=GLUSTERFS',
'VERSION=3', '_SHA_CKSUM=9fdc035c398d2cd8b5c31bf5eea2882c8782ed57']
Thread-13::DEBUG::2014-06-21
16:17:14,341::resourceManager::198::ResourceManager.Request::(__init__)
ResName=`Storage.806d2356-12cf-437c-8917-dd13ee823e36`ReqID=`de2ede47-22fa-43b8-9f3b-dc714a45b450`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '980' at
'createStoragePool'
Thread-13::DEBUG::2014-06-21
16:17:14,342::resourceManager::542::ResourceManager::(registerResource)
Trying to register resource
'Storage.806d2356-12cf-437c-8917-dd13ee823e36' for lock type 'exclusive'
Thread-13::DEBUG::2014-06-21
16:17:14,342::resourceManager::601::ResourceManager::(registerResource)
Resource 'Storage.806d2356-12cf-437c-8917-dd13ee823e36' is free. Now
locking as 'exclusive' (1 active user)
Thread-13::DEBUG::2014-06-21
16:17:14,343::resourceManager::238::ResourceManager.Request::(grant)
ResName=`Storage.806d2356-12cf-437c-8917-dd13ee823e36`ReqID=`de2ede47-22fa-43b8-9f3b-dc714a45b450`::Granted
request
Thread-13::DEBUG::2014-06-21
16:17:14,343::task::827::TaskManager.Task::(resourceAcquired)
Task=`d815e5e5-0202-4137-94be-21dc5e2b61c9`::_resourcesAcquired:
Storage.806d2356-12cf-437c-8917-dd13ee823e36 (exclusive)
Thread-13::DEBUG::2014-06-21
16:17:14,344::task::990::TaskManager.Task::(_decref)
Task=`d815e5e5-0202-4137-94be-21dc5e2b61c9`::ref 1 aborting False
Thread-13::DEBUG::2014-06-21
16:17:14,345::resourceManager::198::ResourceManager.Request::(__init__)
ResName=`Storage.dc661957-c0c1-44ba-a5b9-e6558904207b`ReqID=`71bf6917-b501-4016-ad8e-8b84849da8cb`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '982' at
'createStoragePool'
Thread-13::DEBUG::2014-06-21
16:17:14,345::resourceManager::542::ResourceManager::(registerResource)
Trying to register resource
'Storage.dc661957-c0c1-44ba-a5b9-e6558904207b' for lock type 'exclusive'
Thread-13::DEBUG::2014-06-21
16:17:14,346::resourceManager::601::ResourceManager::(registerResource)
Resource 'Storage.dc661957-c0c1-44ba-a5b9-e6558904207b' is free. Now
locking as 'exclusive' (1 active user)
Thread-13::DEBUG::2014-06-21
16:17:14,346::resourceManager::238::ResourceManager.Request::(grant)
ResName=`Storage.dc661957-c0c1-44ba-a5b9-e6558904207b`ReqID=`71bf6917-b501-4016-ad8e-8b84849da8cb`::Granted
request
Thread-13::DEBUG::2014-06-21
16:17:14,347::task::827::TaskManager.Task::(resourceAcquired)
Task=`d815e5e5-0202-4137-94be-21dc5e2b61c9`::_resourcesAcquired:
Storage.dc661957-c0c1-44ba-a5b9-e6558904207b (exclusive)
Thread-13::DEBUG::2014-06-21
16:17:14,347::task::990::TaskManager.Task::(_decref)
Task=`d815e5e5-0202-4137-94be-21dc5e2b61c9`::ref 1 aborting False
Thread-13::INFO::2014-06-21
16:17:14,347::sp::133::Storage.StoragePool::(setBackend) updating pool
806d2356-12cf-437c-8917-dd13ee823e36 backend from type NoneType instance
0x39e278bf00 to type StoragePoolDiskBackend instance 0x7f764c093cb0
Thread-13::INFO::2014-06-21
16:17:14,348::sp::548::Storage.StoragePool::(create)
spUUID=806d2356-12cf-437c-8917-dd13ee823e36 poolName=testing
master_sd=dc661957-c0c1-44ba-a5b9-e6558904207b
domList=['dc661957-c0c1-44ba-a5b9-e6558904207b'] masterVersion=2
{'LEASETIMESEC': 60, 'IOOPTIMEOUTSEC': 10, 'LEASERETRIES': 3,
'LOCKRENEWALINTERVALSEC': 5}
Thread-13::INFO::2014-06-21
16:17:14,348::fileSD::350::Storage.StorageDomain::(validate)
sdUUID=dc661957-c0c1-44ba-a5b9-e6558904207b
Thread-13::DEBUG::2014-06-21
16:17:14,352::persistentDict::234::Storage.PersistentDict::(refresh)
read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=vmimage',
'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=',
'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=',
'REMOTE_PATH=192.168.10.120:/vmimage', 'ROLE=Regular',
'SDUUID=dc661957-c0c1-44ba-a5b9-e6558904207b', 'TYPE=GLUSTERFS',
'VERSION=3', '_SHA_CKSUM=9fdc035c398d2cd8b5c31bf5eea2882c8782ed57']
Thread-13::DEBUG::2014-06-21
16:17:14,357::persistentDict::234::Storage.PersistentDict::(refresh)
read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=vmimage',
'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=',
'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=',
'REMOTE_PATH=192.168.10.120:/vmimage', 'ROLE=Regular',
'SDUUID=dc661957-c0c1-44ba-a5b9-e6558904207b', 'TYPE=GLUSTERFS',
'VERSION=3', '_SHA_CKSUM=9fdc035c398d2cd8b5c31bf5eea2882c8782ed57']
Thread-13::WARNING::2014-06-21
16:17:14,358::fileUtils::167::Storage.fileUtils::(createdir) Dir
/rhev/data-center/806d2356-12cf-437c-8917-dd13ee823e36 already exists
Thread-13::DEBUG::2014-06-21
16:17:14,358::persistentDict::167::Storage.PersistentDict::(transaction)
Starting transaction
Thread-13::DEBUG::2014-06-21
16:17:14,359::persistentDict::175::Storage.PersistentDict::(transaction)
Finished transaction
Thread-13::INFO::2014-06-21
16:17:14,359::clusterlock::184::SANLock::(acquireHostId) Acquiring host
id for domain dc661957-c0c1-44ba-a5b9-e6558904207b (id: 250)
Thread-24::DEBUG::2014-06-21
16:17:14,394::task::595::TaskManager.Task::(_updateState)
Task=`c4430b80-31d9-4a1d-bee8-fae01a438da6`::moving from state init ->
state preparing
Thread-24::INFO::2014-06-21
16:17:14,395::logUtils::44::dispatcher::(wrapper) Run and protect:
repoStats(options=None)
Thread-24::INFO::2014-06-21
16:17:14,395::logUtils::47::dispatcher::(wrapper) Run and protect:
repoStats, Return response: {}
Thread-24::DEBUG::2014-06-21
16:17:14,396::task::1185::TaskManager.Task::(prepare)
Task=`c4430b80-31d9-4a1d-bee8-fae01a438da6`::finished: {}
Thread-24::DEBUG::2014-06-21
16:17:14,396::task::595::TaskManager.Task::(_updateState)
Task=`c4430b80-31d9-4a1d-bee8-fae01a438da6`::moving from state preparing
-> state finished
Thread-24::DEBUG::2014-06-21
16:17:14,396::resourceManager::940::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-24::DEBUG::2014-06-21
16:17:14,396::resourceManager::977::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-24::DEBUG::2014-06-21
16:17:14,397::task::990::TaskManager.Task::(_decref)
Task=`c4430b80-31d9-4a1d-bee8-fae01a438da6`::ref 0 aborting False
Thread-13::ERROR::2014-06-21
16:17:15,361::task::866::TaskManager.Task::(_setError)
Task=`d815e5e5-0202-4137-94be-21dc5e2b61c9`::Unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 873, in _run
return fn(*args, **kargs)
File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/storage/hsm.py", line 988, in createStoragePool
leaseParams)
File "/usr/share/vdsm/storage/sp.py", line 573, in create
self._acquireTemporaryClusterLock(msdUUID, leaseParams)
File "/usr/share/vdsm/storage/sp.py", line 515, in
_acquireTemporaryClusterLock
msd.acquireHostId(self.id)
File "/usr/share/vdsm/storage/sd.py", line 467, in acquireHostId
self._clusterLock.acquireHostId(hostId, async)
File "/usr/share/vdsm/storage/clusterlock.py", line 199, in acquireHostId
raise se.AcquireHostIdFailure(self._sdUUID, e)
AcquireHostIdFailure: Cannot acquire host id:
('dc661957-c0c1-44ba-a5b9-e6558904207b', SanlockException(90, 'Sanlock
lockspace add failure', 'Message too long'))
Thread-13::DEBUG::2014-06-21
16:17:15,363::task::885::TaskManager.Task::(_run)
Task=`d815e5e5-0202-4137-94be-21dc5e2b61c9`::Task._run:
d815e5e5-0202-4137-94be-21dc5e2b61c9 (None,
'806d2356-12cf-437c-8917-dd13ee823e36', 'testing',
'dc661957-c0c1-44ba-a5b9-e6558904207b',
['dc661957-c0c1-44ba-a5b9-e6558904207b'], 2, None, 5, 60, 10, 3) {}
failed - stopping task
Thread-13::DEBUG::2014-06-21
16:17:15,364::task::1211::TaskManager.Task::(stop)
Task=`d815e5e5-0202-4137-94be-21dc5e2b61c9`::stopping in state preparing
(force False)
Thread-13::DEBUG::2014-06-21
16:17:15,364::task::990::TaskManager.Task::(_decref)
Task=`d815e5e5-0202-4137-94be-21dc5e2b61c9`::ref 1 aborting True
Thread-13::INFO::2014-06-21
16:17:15,365::task::1168::TaskManager.Task::(prepare)
Task=`d815e5e5-0202-4137-94be-21dc5e2b61c9`::aborting: Task is aborted:
'Cannot acquire host id' - code 661
Thread-13::DEBUG::2014-06-21
16:17:15,365::task::1173::TaskManager.Task::(prepare)
Task=`d815e5e5-0202-4137-94be-21dc5e2b61c9`::Prepare: aborted: Cannot
acquire host id
Thread-13::DEBUG::2014-06-21
16:17:15,365::task::990::TaskManager.Task::(_decref)
Task=`d815e5e5-0202-4137-94be-21dc5e2b61c9`::ref 0 aborting True
Thread-13::DEBUG::2014-06-21
16:17:15,366::task::925::TaskManager.Task::(_doAbort)
Task=`d815e5e5-0202-4137-94be-21dc5e2b61c9`::Task._doAbort: force False
Thread-13::DEBUG::2014-06-21
16:17:15,366::resourceManager::977::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-13::DEBUG::2014-06-21
16:17:15,366::task::595::TaskManager.Task::(_updateState)
Task=`d815e5e5-0202-4137-94be-21dc5e2b61c9`::moving from state preparing
-> state aborting
Thread-13::DEBUG::2014-06-21
16:17:15,366::task::550::TaskManager.Task::(__state_aborting)
Task=`d815e5e5-0202-4137-94be-21dc5e2b61c9`::_aborting: recover policy none
Thread-13::DEBUG::2014-06-21
16:17:15,367::task::595::TaskManager.Task::(_updateState)
Task=`d815e5e5-0202-4137-94be-21dc5e2b61c9`::moving from state aborting
-> state failed
Thread-13::DEBUG::2014-06-21
16:17:15,367::resourceManager::940::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.dc661957-c0c1-44ba-a5b9-e6558904207b': < ResourceRef
'Storage.dc661957-c0c1-44ba-a5b9-e6558904207b', isValid: 'True' obj:
'None'>, 'Storage.806d2356-12cf-437c-8917-dd13ee823e36': < ResourceRef
'Storage.806d2356-12cf-437c-8917-dd13ee823e36', isValid: 'True' obj:
'None'>}
Thread-13::DEBUG::2014-06-21
16:17:15,367::resourceManager::977::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-13::DEBUG::2014-06-21
16:17:15,368::resourceManager::616::ResourceManager::(releaseResource)
Trying to release resource 'Storage.dc661957-c0c1-44ba-a5b9-e6558904207b'
Thread-13::DEBUG::2014-06-21
16:17:15,369::resourceManager::635::ResourceManager::(releaseResource)
Released resource 'Storage.dc661957-c0c1-44ba-a5b9-e6558904207b' (0
active users)
Thread-13::DEBUG::2014-06-21
16:17:15,369::resourceManager::641::ResourceManager::(releaseResource)
Resource 'Storage.dc661957-c0c1-44ba-a5b9-e6558904207b' is free, finding
out if anyone is waiting for it.
Thread-13::DEBUG::2014-06-21
16:17:15,369::resourceManager::649::ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.dc661957-c0c1-44ba-a5b9-e6558904207b', Clearing records.
Thread-13::DEBUG::2014-06-21
16:17:15,370::resourceManager::616::ResourceManager::(releaseResource)
Trying to release resource 'Storage.806d2356-12cf-437c-8917-dd13ee823e36'
Thread-13::DEBUG::2014-06-21
16:17:15,370::resourceManager::635::ResourceManager::(releaseResource)
Released resource 'Storage.806d2356-12cf-437c-8917-dd13ee823e36' (0
active users)
Thread-13::DEBUG::2014-06-21
16:17:15,370::resourceManager::641::ResourceManager::(releaseResource)
Resource 'Storage.806d2356-12cf-437c-8917-dd13ee823e36' is free, finding
out if anyone is waiting for it.
Thread-13::DEBUG::2014-06-21
16:17:15,371::resourceManager::649::ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.806d2356-12cf-437c-8917-dd13ee823e36', Clearing records.
Thread-13::ERROR::2014-06-21
16:17:15,371::dispatcher::65::Storage.Dispatcher.Protect::(run)
{'status': {'message': "Cannot acquire host id:
('dc661957-c0c1-44ba-a5b9-e6558904207b', SanlockException(90, 'Sanlock
lockspace add failure', 'Message too long'))", 'code': 661}}
My oVirt version: 3.4.2-1.el6 (CentOS 6.5)
The hypervisor hosts run GlusterFS 3.5.0-3.fc19 (Fedora 19).
The two storage servers run GlusterFS 3.5.0-2.el6 (CentOS 6.5).
So I am NOT using the local storage of the hypervisor hosts for the
GlusterFS bricks.
What can I do to solve this error?
--
Tiemen Ruiten
Systems Engineer
R&D Media
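Not a confirmed fix, but a hedged sketch of the checks usually worth starting with when sanlock cannot add a lockspace on a GlusterFS domain: make sure sanlock itself is healthy on the host, and that the volume's ownership options let the vdsm uid and kvm gid (both 36) write the ids file under dom_md. The volume name vmimage and the mount path come from the log above.
# Hedged sketch: run the first two commands on the hypervisor host, the rest on a Gluster server.
sanlock client status
ls -l /rhev/data-center/mnt/glusterSD/192.168.10.120:_vmimage/dc661957-c0c1-44ba-a5b9-e6558904207b/dom_md/
gluster volume info vmimage
# Commonly recommended for oVirt data domains on Gluster (36 = vdsm uid / kvm gid):
gluster volume set vmimage storage.owner-uid 36
gluster volume set vmimage storage.owner-gid 36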