iptables management
by Chris Adams
During setup, I allowed the script to change iptables rules. Is this
necessary? Also, is it "active" management (where oVirt will keep making
changes later), or just a one-time thing?
I ask because I have some other iptables setup I want (such as limited
SSH access), and I don't want to make changes to iptables that oVirt
will override later or anything like that.
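For context, the sort of rules I'd add by hand look roughly like this
(illustrative only; the subnet is a placeholder, not my real network):

    # allow SSH only from a management subnet, drop it from everywhere else
    iptables -A INPUT -p tcp --dport 22 -s 192.0.2.0/24 -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j DROP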
--
Chris Adams <cma(a)cmadams.net>
Re: [ovirt-users] Fake power management?
by mots
Yes, pacemaker manages the engine. That part is working fine, the engine restarts on the remaining node without problems.
It's just that the guests don't come back up until the powered down node has been fenced manually. (A rough way to script that manual step is sketched below, after the quoted thread.)

-----Original Message-----
> From: Barak Azulay <bazulay(a)redhat.com>
> Sent: Mon 17 November 2014 11:35
> To: Patrick Lottenbach <pl(a)a-bot.ch>
> CC: users(a)ovirt.org
> Subject: Re: [ovirt-users] Fake power management?
>
>
>
> ----- Original Message -----
> > From: "mots" <mots(a)nepu.moe>
> > To: users(a)ovirt.org
> > Sent: Friday, November 14, 2014 4:54:08 PM
> > Subject: [ovirt-users] Fake power management?
> >
> > Fake power management? Hello,
> >
> > I'm building a small demonstration system for our sales team to take to a
> > customer so that they can show them our solutions.
> > Hardware: Two Intel NUCs, a 4 port switch and a laptop.
> > Engine: Runs as a VM on one of the NUCs, which one it runs on is determined
> > by pacemaker.
> > Storage: Also managed by pacemaker, it's DRBD-backed and accessed with iSCSI.
> > oVirt version: 3.5
> > OS: CentOS 6.6
> >
> > The idea is to have our sales representative (or the potential customer
> > himself) randomly pull the plug on one of the NUCs to show that the system
> > stays operational when part of the hardware fails.
>
> I assume you are aware that the engine might fence the node it is running on ...
> Or do you use pacemaker to run the engine as well?
>
> > My problem is that I don't have any way to implement power management, so the
> > Engine can't fence nodes and won't restart guests that were running on the
> > node which lost power. In pacemaker I can just configure fencing over SSH or
> > even disable the requirement to do so completely. Is there something similar
> > for oVirt, so that the Engine will consider a node which it can't connect to
> > to be powered down?
> >
> > Regards,
> >
> > mots
> >
> > _______________________________________________
> > Users mailing list
> > Users(a)ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
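Scripting the manual fencing confirmation against the engine's REST API might look roughly like this. This is an untested sketch: it assumes the host fence action accepts fence_type "manual" (the REST equivalent of "Confirm Host has been Rebooted" in the webadmin), and the engine URL, host ID and password are placeholders.

    # tell the engine a non-responsive host is definitely down, so HA guests restart elsewhere
    curl -k -u admin@internal:PASSWORD \
         -H "Content-Type: application/xml" \
         -d "<action><fence_type>manual</fence_type></action>" \
         https://engine.example.com/ovirt-engine/api/hosts/HOST_ID/fence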
ovirt local datacenter/cluster using zfs local storage for storage domain?
by Mathew Gancarz
Hello all,
I'm exploring using oVirt 3.5 as the management engine for a new cluster I am building. I don't really need high availability, and I have 3 servers with fast SSDs on local storage I'd like to use, preferably using ZFS to ensure reliability of the storage.

I set up a brand new CentOS 6.6 minimal install and have been able to get oVirt up using the all-in-one plugin. I can use local storage (the default EXT4 LVM filesystem CentOS sets up) to set up a local data center for each of the servers, but I have run into issues when I try to provision a ZFS filesystem as a local storage domain. I'm using ZFS on Linux 0.6.3.

I first had multipath.conf issues, which prevented me from even setting up a zpool using the local disks. After blacklisting the local disks in /etc/multipath.conf, I was able to get ZFS up and running and create the local domains, but I get errors when I try to create a storage domain of type Data / Local on Host and point it at the ZFS path (/vmstore/isos). (PS: I'm not trying to create an ISO domain, it's just a directory named "isos".)

The error message that comes up is "Error while executing action New Local Storage Domain: Storage Domain target is unsupported".

Has anyone tried this before? I am able to set up an NFS export of the ZFS folder as a Storage Domain (using the directions here: http://virt.guru/2014/02/25/installing-ovirt-with-shared-local-storage/), but if possible I'd like to skip the NFS layer and go directly to the hardware.
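For reference, the NFS fallback boils down to exporting the dataset and pointing oVirt at it as an NFS storage domain; roughly something like the following (illustrative only, not necessarily what the linked article does; the client subnet and export options are placeholders):

    # export the ZFS dataset over NFS (ZFS on Linux drives exportfs via the sharenfs property)
    zfs set sharenfs="rw=@192.0.2.0/24,no_root_squash" vmstore/isos
    service nfs start
    showmount -e localhost    # confirm the export is listed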
The supervdsm.log shows:
MainProcess|Thread-1734::DEBUG::2014-11-17 15:50:31,350::supervdsmServer::101::SuperVdsm.ServerCallback::(wrapper) call validateAccess with ('qemu', ('qemu', 'kvm'), u'/vmstore/isos', 5) {}
MainProcess|Thread-1734::DEBUG::2014-11-17 15:50:31,356::supervdsmServer::108::SuperVdsm.ServerCallback::(wrapper) return validateAccess with None
MainProcess|Thread-1735::DEBUG::2014-11-17 15:50:31,436::supervdsmServer::101::SuperVdsm.ServerCallback::(wrapper) call validateAccess with ('qemu', ('qemu', 'kvm'), u'/vmstore/isos', 5) {}
MainProcess|Thread-1735::DEBUG::2014-11-17 15:50:31,441::supervdsmServer::108::SuperVdsm.ServerCallback::(wrapper) return validateAccess with None
MainProcess|Thread-1736::DEBUG::2014-11-17 15:50:31,519::supervdsmServer::101::SuperVdsm.ServerCallback::(wrapper) call hbaRescan with () {}
MainProcess|Thread-1736::INFO::2014-11-17 15:50:31,520::hba::54::Storage.HBA::(rescan) Rescanning HBAs
MainProcess|Thread-1736::DEBUG::2014-11-17 15:50:31,520::supervdsmServer::108::SuperVdsm.ServerCallback::(wrapper) return hbaRescan with None
MainProcess|Thread-1736::DEBUG::2014-11-17 15:50:32,100::supervdsmServer::101::SuperVdsm.ServerCallback::(wrapper) call validateAccess with ('qemu', ('qemu', 'kvm'), u'/rhev/data-center/mnt/_vmstore_isos', 5) {}
MainProcess|Thread-1736::DEBUG::2014-11-17 15:50:32,105::supervdsmServer::108::SuperVdsm.ServerCallback::(wrapper) return validateAccess with None
MainProcess|Thread-1744::DEBUG::2014-11-17 15:50:32,278::supervdsmServer::101::SuperVdsm.ServerCallback::(wrapper) call hbaRescan with () {}
MainProcess|Thread-1744::INFO::2014-11-17 15:50:32,279::hba::54::Storage.HBA::(rescan) Rescanning HBAs
MainProcess|Thread-1744::DEBUG::2014-11-17 15:50:32,279::supervdsmServer::108::SuperVdsm.ServerCallback::(wrapper) return hbaRescan with None
And the vdsm.log shows:
Thread-1732::DEBUG::2014-11-17 15:50:31,032::task::595::Storage.TaskManager.Task::(_updateState) Task=`82d29b1b-0c33-4078-887b-476d95f4b1a1`::moving from state init -> state preparing
Thread-1732::INFO::2014-11-17 15:50:31,032::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-1732::INFO::2014-11-17 15:50:31,033::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {u'158ef830-da69-48b4-95b0-3615d6fb5b00': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000627117', 'lastCheck': '8.2', 'valid': True}}
Thread-1732::DEBUG::2014-11-17 15:50:31,033::task::1191::Storage.TaskManager.Task::(prepare) Task=`82d29b1b-0c33-4078-887b-476d95f4b1a1`::finished: {u'158ef830-da69-48b4-95b0-3615d6fb5b00': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000627117', 'lastCheck': '8.2', 'valid': True}}
Thread-1732::DEBUG::2014-11-17 15:50:31,033::task::595::Storage.TaskManager.Task::(_updateState) Task=`82d29b1b-0c33-4078-887b-476d95f4b1a1`::moving from state preparing -> state finished
Thread-1732::DEBUG::2014-11-17 15:50:31,034::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1732::DEBUG::2014-11-17 15:50:31,034::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1732::DEBUG::2014-11-17 15:50:31,034::task::993::Storage.TaskManager.Task::(_decref) Task=`82d29b1b-0c33-4078-887b-476d95f4b1a1`::ref 0 aborting False
Thread-1732::DEBUG::2014-11-17 15:50:31,042::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2014-11-17 15:50:31,060::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2014-11-17 15:50:31,062::__init__::504::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1733::DEBUG::2014-11-17 15:50:31,066::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2014-11-17 15:50:31,338::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2014-11-17 15:50:31,341::__init__::504::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1734::DEBUG::2014-11-17 15:50:31,341::__init__::467::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StoragePool.connectStorageServer' in bridge with {u'connectionParams': [{u'id': u'00000000-0000-0000-0000-000000000000', u'connection': u'/vmstore/isos', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'password': u'', u'port': u''}], u'storagepoolID': u'00000000-0000-0000-0000-000000000000', u'domainType': 4}
Thread-1734::DEBUG::2014-11-17 15:50:31,344::task::595::Storage.TaskManager.Task::(_updateState) Task=`4088bef4-489c-41a4-bcd4-d5f1906378b9`::moving from state init -> state preparing
Thread-1734::INFO::2014-11-17 15:50:31,345::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=4, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'connection': u'/vmstore/isos', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'password': '******', u'id': u'00000000-0000-0000-0000-000000000000', u'port': u''}], options=None)
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:31,346::__init__::375::IOProcess::(_processLogs) Receiving request...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:31,346::__init__::375::IOProcess::(_processLogs) Queuing request in the thread pool...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:31,347::__init__::375::IOProcess::(_processLogs) Extracting request information...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:31,347::__init__::375::IOProcess::(_processLogs) (2321) Got request for method 'access'
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:31,347::__init__::375::IOProcess::(_processLogs) (2321) Queuing response
Thread-1734::DEBUG::2014-11-17 15:50:31,357::hsm::2389::Storage.HSM::(__prefetchDomains) local _path: /vmstore/isos
Thread-1734::DEBUG::2014-11-17 15:50:31,358::hsm::2396::Storage.HSM::(__prefetchDomains) Found SD uuids: ()
Thread-1734::DEBUG::2014-11-17 15:50:31,358::hsm::2452::Storage.HSM::(connectStorageServer) knownSDs: {158ef830-da69-48b4-95b0-3615d6fb5b00: storage.localFsSD.findDomain, ac821c1f-b7ca-4534-a10f-9b98c325a070: storage.nfsSD.findDomain}
Thread-1734::INFO::2014-11-17 15:50:31,358::logUtils::47::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': u'00000000-0000-0000-0000-000000000000'}]}
Thread-1734::DEBUG::2014-11-17 15:50:31,359::task::1191::Storage.TaskManager.Task::(prepare) Task=`4088bef4-489c-41a4-bcd4-d5f1906378b9`::finished: {'statuslist': [{'status': 0, 'id': u'00000000-0000-0000-0000-000000000000'}]}
Thread-1734::DEBUG::2014-11-17 15:50:31,359::task::595::Storage.TaskManager.Task::(_updateState) Task=`4088bef4-489c-41a4-bcd4-d5f1906378b9`::moving from state preparing -> state finished
Thread-1734::DEBUG::2014-11-17 15:50:31,359::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1734::DEBUG::2014-11-17 15:50:31,359::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1734::DEBUG::2014-11-17 15:50:31,360::task::993::Storage.TaskManager.Task::(_decref) Task=`4088bef4-489c-41a4-bcd4-d5f1906378b9`::ref 0 aborting False
Thread-1734::DEBUG::2014-11-17 15:50:31,360::__init__::498::jsonrpc.JsonRpcServer::(_serveRequest) Return 'StoragePool.connectStorageServer' in bridge with [{'status': 0, 'id': u'00000000-0000-0000-0000-000000000000'}]
Thread-1734::DEBUG::2014-11-17 15:50:31,361::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2014-11-17 15:50:31,425::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2014-11-17 15:50:31,428::__init__::504::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1735::DEBUG::2014-11-17 15:50:31,428::__init__::467::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StoragePool.connectStorageServer' in bridge with {u'connectionParams': [{u'id': u'deb5a580-6994-4db9-9899-cd05c39c2efa', u'connection': u'/vmstore/isos', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'password': u'', u'port': u''}], u'storagepoolID': u'00000000-0000-0000-0000-000000000000', u'domainType': 4}
Thread-1735::DEBUG::2014-11-17 15:50:31,431::task::595::Storage.TaskManager.Task::(_updateState) Task=`1cdc582f-bca2-49c1-a48d-4463f53f0481`::moving from state init -> state preparing
Thread-1735::INFO::2014-11-17 15:50:31,431::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=4, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'connection': u'/vmstore/isos', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'password': '******', u'id': u'deb5a580-6994-4db9-9899-cd05c39c2efa', u'port': u''}], options=None)
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:31,433::__init__::375::IOProcess::(_processLogs) Receiving request...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:31,433::__init__::375::IOProcess::(_processLogs) Queuing request in the thread pool...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:31,433::__init__::375::IOProcess::(_processLogs) Extracting request information...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:31,433::__init__::375::IOProcess::(_processLogs) (2322) Got request for method 'access'
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:31,434::__init__::375::IOProcess::(_processLogs) (2322) Queuing response
Thread-1735::DEBUG::2014-11-17 15:50:31,442::hsm::2389::Storage.HSM::(__prefetchDomains) local _path: /vmstore/isos
Thread-1735::DEBUG::2014-11-17 15:50:31,442::hsm::2396::Storage.HSM::(__prefetchDomains) Found SD uuids: ()
Thread-1735::DEBUG::2014-11-17 15:50:31,443::hsm::2452::Storage.HSM::(connectStorageServer) knownSDs: {158ef830-da69-48b4-95b0-3615d6fb5b00: storage.localFsSD.findDomain, ac821c1f-b7ca-4534-a10f-9b98c325a070: storage.nfsSD.findDomain}
Thread-1735::INFO::2014-11-17 15:50:31,443::logUtils::47::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': u'deb5a580-6994-4db9-9899-cd05c39c2efa'}]}
Thread-1735::DEBUG::2014-11-17 15:50:31,443::task::1191::Storage.TaskManager.Task::(prepare) Task=`1cdc582f-bca2-49c1-a48d-4463f53f0481`::finished: {'statuslist': [{'status': 0, 'id': u'deb5a580-6994-4db9-9899-cd05c39c2efa'}]}
Thread-1735::DEBUG::2014-11-17 15:50:31,444::task::595::Storage.TaskManager.Task::(_updateState) Task=`1cdc582f-bca2-49c1-a48d-4463f53f0481`::moving from state preparing -> state finished
Thread-1735::DEBUG::2014-11-17 15:50:31,444::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1735::DEBUG::2014-11-17 15:50:31,444::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1735::DEBUG::2014-11-17 15:50:31,444::task::993::Storage.TaskManager.Task::(_decref) Task=`1cdc582f-bca2-49c1-a48d-4463f53f0481`::ref 0 aborting False
Thread-1735::DEBUG::2014-11-17 15:50:31,445::__init__::498::jsonrpc.JsonRpcServer::(_serveRequest) Return 'StoragePool.connectStorageServer' in bridge with [{'status': 0, 'id': u'deb5a580-6994-4db9-9899-cd05c39c2efa'}]
Thread-1735::DEBUG::2014-11-17 15:50:31,445::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2014-11-17 15:50:31,451::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
Thread-1736::DEBUG::2014-11-17 15:50:31,454::__init__::467::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StorageDomain.create' in bridge with {u'name': u'test', u'domainType': 4, u'domainClass': 1, u'typeArgs': u'/vmstore/isos', u'version': u'3', u'storagedomainID': u'c46aa2c4-c405-45eb-b7fd-71f627d1c546'}
JsonRpcServer::DEBUG::2014-11-17 15:50:31,454::__init__::504::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1736::DEBUG::2014-11-17 15:50:31,459::task::595::Storage.TaskManager.Task::(_updateState) Task=`54f143dd-2229-4bc1-b639-9017ea3ecdd0`::moving from state init -> state preparing
Thread-1736::INFO::2014-11-17 15:50:31,459::logUtils::44::dispatcher::(wrapper) Run and protect: createStorageDomain(storageType=4, sdUUID=u'c46aa2c4-c405-45eb-b7fd-71f627d1c546', domainName=u'test', typeSpecificArg=u'/vmstore/isos', domClass=1, domVersion=u'3', options=None)
Thread-1736::DEBUG::2014-11-17 15:50:31,459::misc::741::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
Thread-1736::DEBUG::2014-11-17 15:50:31,460::misc::743::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-1736::DEBUG::2014-11-17 15:50:31,460::misc::741::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
Thread-1736::DEBUG::2014-11-17 15:50:31,460::misc::743::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-1736::DEBUG::2014-11-17 15:50:31,461::iscsi::403::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds
Thread-1736::DEBUG::2014-11-17 15:50:31,461::iscsiadm::92::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)
Thread-1736::DEBUG::2014-11-17 15:50:31,517::misc::751::Storage.SamplingMethod::(__call__) Returning last result
Thread-1736::DEBUG::2014-11-17 15:50:31,521::multipath::110::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /sbin/multipath (cwd None)
Thread-1736::DEBUG::2014-11-17 15:50:31,705::multipath::110::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0
Thread-1736::DEBUG::2014-11-17 15:50:31,706::lvm::489::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1736::DEBUG::2014-11-17 15:50:31,707::lvm::491::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1736::DEBUG::2014-11-17 15:50:31,707::lvm::500::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1736::DEBUG::2014-11-17 15:50:31,708::lvm::502::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1736::DEBUG::2014-11-17 15:50:31,708::lvm::520::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1736::DEBUG::2014-11-17 15:50:31,708::lvm::522::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1736::DEBUG::2014-11-17 15:50:31,709::misc::751::Storage.SamplingMethod::(__call__) Returning last result
Thread-1736::ERROR::2014-11-17 15:50:31,709::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain c46aa2c4-c405-45eb-b7fd-71f627d1c546
Thread-1736::ERROR::2014-11-17 15:50:31,709::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain c46aa2c4-c405-45eb-b7fd-71f627d1c546
Thread-1736::DEBUG::2014-11-17 15:50:31,710::lvm::365::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
Thread-1736::DEBUG::2014-11-17 15:50:31,712::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name c46aa2c4-c405-45eb-b7fd-71f627d1c546 (cwd None)
Thread-1736::DEBUG::2014-11-17 15:50:32,049::lvm::288::Storage.Misc.excCmd::(cmd) FAILED: <err> = ' Volume group "c46aa2c4-c405-45eb-b7fd-71f627d1c546" not found\n Skipping volume group c46aa2c4-c405-45eb-b7fd-71f627d1c546\n'; <rc> = 5
Thread-1736::WARNING::2014-11-17 15:50:32,052::lvm::370::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] [' Volume group "c46aa2c4-c405-45eb-b7fd-71f627d1c546" not found', ' Skipping volume group c46aa2c4-c405-45eb-b7fd-71f627d1c546']
Thread-1736::DEBUG::2014-11-17 15:50:32,052::lvm::407::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,060::__init__::375::IOProcess::(_processLogs) Receiving request...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,061::__init__::375::IOProcess::(_processLogs) Queuing request in the thread pool...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,062::__init__::375::IOProcess::(_processLogs) Extracting request information...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,062::__init__::375::IOProcess::(_processLogs) (2323) Got request for method 'glob'
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,062::__init__::375::IOProcess::(_processLogs) (2323) Queuing response
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,064::__init__::375::IOProcess::(_processLogs) Receiving request...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,065::__init__::375::IOProcess::(_processLogs) Queuing request in the thread pool...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,065::__init__::375::IOProcess::(_processLogs) Extracting request information...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,065::__init__::375::IOProcess::(_processLogs) (2324) Got request for method 'glob'
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,066::__init__::375::IOProcess::(_processLogs) (2324) Queuing response
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,067::__init__::375::IOProcess::(_processLogs) Receiving request...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,068::__init__::375::IOProcess::(_processLogs) Queuing request in the thread pool...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,069::__init__::375::IOProcess::(_processLogs) Extracting request information...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,069::__init__::375::IOProcess::(_processLogs) (2325) Got request for method 'glob'
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,069::__init__::375::IOProcess::(_processLogs) (2325) Queuing response
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,079::__init__::375::IOProcess::(_processLogs) Receiving request...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,082::__init__::375::IOProcess::(_processLogs) Queuing request in the thread pool...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,082::__init__::375::IOProcess::(_processLogs) Extracting request information...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,083::__init__::375::IOProcess::(_processLogs) (2326) Got request for method 'glob'
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,083::__init__::375::IOProcess::(_processLogs) (2326) Queuing response
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,085::__init__::375::IOProcess::(_processLogs) Receiving request...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,086::__init__::375::IOProcess::(_processLogs) Queuing request in the thread pool...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,086::__init__::375::IOProcess::(_processLogs) Extracting request information...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,087::__init__::375::IOProcess::(_processLogs) (2327) Got request for method 'glob'
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,087::__init__::375::IOProcess::(_processLogs) (2327) Queuing response
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,089::__init__::375::IOProcess::(_processLogs) Receiving request...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,089::__init__::375::IOProcess::(_processLogs) Queuing request in the thread pool...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,090::__init__::375::IOProcess::(_processLogs) Extracting request information...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,090::__init__::375::IOProcess::(_processLogs) (2328) Got request for method 'glob'
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,090::__init__::375::IOProcess::(_processLogs) (2328) Queuing response
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,092::__init__::375::IOProcess::(_processLogs) Receiving request...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,093::__init__::375::IOProcess::(_processLogs) Queuing request in the thread pool...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,093::__init__::375::IOProcess::(_processLogs) Extracting request information...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,093::__init__::375::IOProcess::(_processLogs) (2329) Got request for method 'glob'
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,093::__init__::375::IOProcess::(_processLogs) (2329) Queuing response
Thread-1736::ERROR::2014-11-17 15:50:32,095::sdc::143::Storage.StorageDomainCache::(_findDomain) domain c46aa2c4-c405-45eb-b7fd-71f627d1c546 not found
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'c46aa2c4-c405-45eb-b7fd-71f627d1c546',)
Thread-1736::INFO::2014-11-17 15:50:32,096::localFsSD::73::Storage.StorageDomain::(create) sdUUID=c46aa2c4-c405-45eb-b7fd-71f627d1c546 domainName=test remotePath=/vmstore/isos domClass=1
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,097::__init__::375::IOProcess::(_processLogs) Receiving request...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,098::__init__::375::IOProcess::(_processLogs) Queuing request in the thread pool...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,098::__init__::375::IOProcess::(_processLogs) Extracting request information...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,098::__init__::375::IOProcess::(_processLogs) (2330) Got request for method 'access'
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,099::__init__::375::IOProcess::(_processLogs) (2330) Queuing response
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,107::__init__::375::IOProcess::(_processLogs) Receiving request...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,107::__init__::375::IOProcess::(_processLogs) Queuing request in the thread pool...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,108::__init__::375::IOProcess::(_processLogs) Extracting request information...
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,108::__init__::375::IOProcess::(_processLogs) (2331) Got request for method 'touch'
ioprocess communication (5073)::DEBUG::2014-11-17 15:50:32,108::__init__::375::IOProcess::(_processLogs) (2331) Queuing response
Thread-1736::ERROR::2014-11-17 15:50:32,109::fileSD::92::Storage.fileSD::(validateFileSystemFeatures) Underlying file system doesn't supportdirect IO
Thread-1736::ERROR::2014-11-17 15:50:32,109::task::866::Storage.TaskManager.Task::(_setError) Task=`54f143dd-2229-4bc1-b639-9017ea3ecdd0`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 2683, in createStorageDomain
    domVersion)
  File "/usr/share/vdsm/storage/localFsSD.py", line 84, in create
    cls._preCreateValidation(sdUUID, mntPoint, remotePath, version)
  File "/usr/share/vdsm/storage/localFsSD.py", line 51, in _preCreateValidation
    fileSD.validateFileSystemFeatures(sdUUID, domPath)
  File "/usr/share/vdsm/storage/fileSD.py", line 94, in validateFileSystemFeatures
    raise se.StorageDomainTargetUnsupported()
StorageDomainTargetUnsupported: Storage Domain target is unsupported: ()
Thread-1736::DEBUG::2014-11-17 15:50:32,110::task::885::Storage.TaskManager.Task::(_run) Task=`54f143dd-2229-4bc1-b639-9017ea3ecdd0`::Task._run: 54f143dd-2229-4bc1-b639-9017ea3ecdd0 (4, u'c46aa2c4-c405-45eb-b7fd-71f627d1c546', u'test', u'/vmstore/isos', 1, u'3') {} failed - stopping task
Thread-1736::DEBUG::2014-11-17 15:50:32,110::task::1217::Storage.TaskManager.Task::(stop) Task=`54f143dd-2229-4bc1-b639-9017ea3ecdd0`::stopping in state preparing (force False)
Thread-1736::DEBUG::2014-11-17 15:50:32,111::task::993::Storage.TaskManager.Task::(_decref) Task=`54f143dd-2229-4bc1-b639-9017ea3ecdd0`::ref 1 aborting True
Thread-1736::INFO::2014-11-17 15:50:32,111::task::1171::Storage.TaskManager.Task::(prepare) Task=`54f143dd-2229-4bc1-b639-9017ea3ecdd0`::aborting: Task is aborted: 'Storage Domain target is unsupported' - code 399
Thread-1736::DEBUG::2014-11-17 15:50:32,111::task::1176::Storage.TaskManager.Task::(prepare) Task=`54f143dd-2229-4bc1-b639-9017ea3ecdd0`::Prepare: aborted: Storage Domain target is unsupported
Thread-1736::DEBUG::2014-11-17 15:50:32,111::task::993::Storage.TaskManager.Task::(_decref) Task=`54f143dd-2229-4bc1-b639-9017ea3ecdd0`::ref 0 aborting True
Thread-1736::DEBUG::2014-11-17 15:50:32,112::task::928::Storage.TaskManager.Task::(_doAbort) Task=`54f143dd-2229-4bc1-b639-9017ea3ecdd0`::Task._doAbort: force False
Thread-1736::DEBUG::2014-11-17 15:50:32,112::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1736::DEBUG::2014-11-17 15:50:32,112::task::595::Storage.TaskManager.Task::(_updateState) Task=`54f143dd-2229-4bc1-b639-9017ea3ecdd0`::moving from state preparing -> state aborting
Thread-1736::DEBUG::2014-11-17 15:50:32,113::task::550::Storage.TaskManager.Task::(__state_aborting) Task=`54f143dd-2229-4bc1-b639-9017ea3ecdd0`::_aborting: recover policy none
Thread-1736::DEBUG::2014-11-17 15:50:32,113::task::595::Storage.TaskManager.Task::(_updateState) Task=`54f143dd-2229-4bc1-b639-9017ea3ecdd0`::moving from state aborting -> state failed
Thread-1736::DEBUG::2014-11-17 15:50:32,113::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1736::DEBUG::2014-11-17 15:50:32,113::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1736::ERROR::2014-11-17 15:50:32,114::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': {'message': 'Storage Domain target is unsupported: ()', 'code': 399}}
Thread-1736::DEBUG::2014-11-17 15:50:32,114::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2014-11-17 15:50:32,212::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
Thread-1744::DEBUG::2014-11-17 15:50:32,215::__init__::467::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StoragePool.disconnectStorageServer' in bridge with {u'connectionParams': [{u'id': u'deb5a580-6994-4db9-9899-cd05c39c2efa', u'connection': u'/vmstore/isos', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'password': u'', u'port': u''}], u'storagepoolID': u'00000000-0000-0000-0000-000000000000', u'domainType': 4}
JsonRpcServer::DEBUG::2014-11-17 15:50:32,215::__init__::504::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1744::DEBUG::2014-11-17 15:50:32,220::task::595::Storage.TaskManager.Task::(_updateState) Task=`11fde6d7-ad39-4b40-bb40-00816fe2c131`::moving from state init -> state preparing
Thread-1744::INFO::2014-11-17 15:50:32,221::logUtils::44::dispatcher::(wrapper) Run and protect: disconnectStorageServer(domType=4, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'connection': u'/vmstore/isos', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'password': '******', u'id': u'deb5a580-6994-4db9-9899-cd05c39c2efa', u'port': u''}], options=None)
Thread-1744::DEBUG::2014-11-17 15:50:32,221::misc::741::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
Thread-1744::DEBUG::2014-11-17 15:50:32,222::misc::743::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-1744::DEBUG::2014-11-17 15:50:32,222::misc::741::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
Thread-1744::DEBUG::2014-11-17 15:50:32,222::misc::743::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-1744::DEBUG::2014-11-17 15:50:32,222::iscsi::403::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds
Thread-1744::DEBUG::2014-11-17 15:50:32,223::iscsiadm::92::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)
Thread-1744::DEBUG::2014-11-17 15:50:32,276::misc::751::Storage.SamplingMethod::(__call__) Returning last result
Thread-1744::DEBUG::2014-11-17 15:50:32,280::multipath::110::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /sbin/multipath (cwd None)
Thread-1744::DEBUG::2014-11-17 15:50:32,409::multipath::110::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0
Thread-1744::DEBUG::2014-11-17 15:50:32,410::lvm::489::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1744::DEBUG::2014-11-17 15:50:32,410::lvm::491::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1744::DEBUG::2014-11-17 15:50:32,411::lvm::500::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1744::DEBUG::2014-11-17 15:50:32,411::lvm::502::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1744::DEBUG::2014-11-17 15:50:32,412::lvm::520::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1744::DEBUG::2014-11-17 15:50:32,412::lvm::522::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1744::DEBUG::2014-11-17 15:50:32,412::misc::751::Storage.SamplingMethod::(__call__) Returning last result
Thread-1744::INFO::2014-11-17 15:50:32,413::logUtils::47::dispatcher::(wrapper) Run and protect: disconnectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': u'deb5a580-6994-4db9-9899-cd05c39c2efa'}]}
Thread-1744::DEBUG::2014-11-17 15:50:32,413::task::1191::Storage.TaskManager.Task::(prepare) Task=`11fde6d7-ad39-4b40-bb40-00816fe2c131`::finished: {'statuslist': [{'status': 0, 'id': u'deb5a580-6994-4db9-9899-cd05c39c2efa'}]}
Thread-1744::DEBUG::2014-11-17 15:50:32,414::task::595::Storage.TaskManager.Task::(_updateState) Task=`11fde6d7-ad39-4b40-bb40-00816fe2c131`::moving from state preparing -> state finished
Thread-1744::DEBUG::2014-11-17 15:50:32,414::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1744::DEBUG::2014-11-17 15:50:32,414::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1744::DEBUG::2014-11-17 15:50:32,414::task::993::Storage.TaskManager.Task::(_decref) Task=`11fde6d7-ad39-4b40-bb40-00816fe2c131`::ref 0 aborting False
Thread-1744::DEBUG::2014-11-17 15:50:32,415::__init__::498::jsonrpc.JsonRpcServer::(_serveRequest) Return 'StoragePool.disconnectStorageServer' in bridge with [{'status': 0, 'id': u'deb5a580-6994-4db9-9899-cd05c39c2efa'}]
Thread-1744::DEBUG::2014-11-17 15:50:32,416::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
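The check that fails at the bottom of the trace is vdsm's direct I/O validation of the domain path ("Underlying file system doesn't supportdirect IO"); as far as I can tell ZFS on Linux 0.6.3 has no O_DIRECT support, which would explain why the EXT4 path works and the ZFS path doesn't. The same failure shows up with a plain direct-I/O write to the dataset (the test file name is arbitrary):

    # fails with "Invalid argument" on the ZoL 0.6.3 dataset, succeeds on the EXT4 mount
    dd if=/dev/zero of=/vmstore/isos/directio-test bs=4k count=1 oflag=direct
    rm -f /vmstore/isos/directio-test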
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:31,454::__init_=
_::467::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StorageDomain.creat=
e' in bridge with {u'name': u'test', u'domainType': 4, u'domainClass': 1, u=
'typeArgs': u'/vmstore/isos', u'version':
u'3', u'storagedomainID': u'c46aa2c4-c405-45eb-b7fd-71f627d1c546'}<o:p></o=
:p></p>
<p class=3D"MsoNormal">JsonRpcServer::DEBUG::2014-11-17 15:50:31,454::__ini=
t__::504::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request<o:p><=
/o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:31,459::task::5=
95::Storage.TaskManager.Task::(_updateState) Task=3D`54f143dd-2229-4bc1-b63=
9-9017ea3ecdd0`::moving from state init -> state preparing<o:p></o:p></p=
>
<p class=3D"MsoNormal">Thread-1736::INFO::2014-11-17 15:50:31,459::logUtils=
::44::dispatcher::(wrapper) Run and protect: createStorageDomain(storageTyp=
e=3D4, sdUUID=3Du'c46aa2c4-c405-45eb-b7fd-71f627d1c546', domainName=3Du'tes=
t', typeSpecificArg=3Du'/vmstore/isos', domClass=3D1,
domVersion=3Du'3', options=3DNone)<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:31,459::misc::7=
41::Storage.SamplingMethod::(__call__) Trying to enter sampling method (sto=
rage.sdc.refreshStorage)<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:31,460::misc::7=
43::Storage.SamplingMethod::(__call__) Got in to sampling method<o:p></o:p>=
</p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:31,460::misc::7=
41::Storage.SamplingMethod::(__call__) Trying to enter sampling method (sto=
rage.iscsi.rescan)<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:31,460::misc::7=
43::Storage.SamplingMethod::(__call__) Got in to sampling method<o:p></o:p>=
</p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:31,461::iscsi::=
403::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 =
seconds<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:31,461::iscsiad=
m::92::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m se=
ssion -R (cwd None)<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:31,517::misc::7=
51::Storage.SamplingMethod::(__call__) Returning last result<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:31,521::multipa=
th::110::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /sbin/multipath (cw=
d None)<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:31,705::multipa=
th::110::Storage.Misc.excCmd::(rescan) SUCCESS: <err> =3D ''; <rc&=
gt; =3D 0<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:31,706::lvm::48=
9::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate op=
eration' got the operation mutex<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:31,707::lvm::49=
1::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate op=
eration' released the operation mutex<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:31,707::lvm::50=
0::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate op=
eration' got the operation mutex<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:31,708::lvm::50=
2::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate op=
eration' released the operation mutex<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:31,708::lvm::52=
0::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate op=
eration' got the operation mutex<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:31,708::lvm::52=
2::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate op=
eration' released the operation mutex<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:31,709::misc::7=
51::Storage.SamplingMethod::(__call__) Returning last result<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::ERROR::2014-11-17 15:50:31,709::sdc::13=
7::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain c=
46aa2c4-c405-45eb-b7fd-71f627d1c546<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::ERROR::2014-11-17 15:50:31,709::sdc::15=
4::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain c4=
6aa2c4-c405-45eb-b7fd-71f627d1c546<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:31,710::lvm::36=
5::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' go=
t the operation mutex<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:31,712::lvm::28=
8::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgs --config ' dev=
ices { preferred_names =3D ["^/dev/mapper/"] ignore_suspended_dev=
ices=3D1 write_cache_state=3D0 disable_after_error_count=3D3
obtain_device_list_from_udev=3D0 filter =3D [ '\''r|.*|'\'' ] } glob=
al { locking_type=3D1 prioritise_write_locks=3D1 wait_for=
_locks=3D1 use_lvmetad=3D0 } backup { retain_min =3D 50&n=
bsp; retain_days =3D 0 } ' --noheadings --units b --nosuffix --separator '|=
' --ignoreskippedcluster
-o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_md=
a_size,vg_mda_free,lv_count,pv_count,pv_name c46aa2c4-c405-45eb-b7fd-71f627=
d1c546 (cwd None)<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:32,049::lvm::28=
8::Storage.Misc.excCmd::(cmd) FAILED: <err> =3D ' Volume group =
"c46aa2c4-c405-45eb-b7fd-71f627d1c546" not found\n Skipping=
volume group c46aa2c4-c405-45eb-b7fd-71f627d1c546\n'; <rc> =3D
5<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::WARNING::2014-11-17 15:50:32,052::lvm::=
370::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] [' Volume group &=
quot;c46aa2c4-c405-45eb-b7fd-71f627d1c546" not found', ' Skippin=
g volume group c46aa2c4-c405-45eb-b7fd-71f627d1c546']<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:32,052::lvm::40=
7::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' re=
leased the operation mutex<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,060::__init__::375::IOProcess::(_processLogs) Receiving request...<o=
:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,061::__init__::375::IOProcess::(_processLogs) Queuing request in the=
thread pool...<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,062::__init__::375::IOProcess::(_processLogs) Extracting request inf=
ormation...<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,062::__init__::375::IOProcess::(_processLogs) (2323) Got request for=
method 'glob'<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,062::__init__::375::IOProcess::(_processLogs) (2323) Queuing respons=
e<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,064::__init__::375::IOProcess::(_processLogs) Receiving request...<o=
:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,065::__init__::375::IOProcess::(_processLogs) Queuing request in the=
thread pool...<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,065::__init__::375::IOProcess::(_processLogs) Extracting request inf=
ormation...<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,065::__init__::375::IOProcess::(_processLogs) (2324) Got request for=
method 'glob'<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,066::__init__::375::IOProcess::(_processLogs) (2324) Queuing respons=
e<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,067::__init__::375::IOProcess::(_processLogs) Receiving request...<o=
:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,068::__init__::375::IOProcess::(_processLogs) Queuing request in the=
thread pool...<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,069::__init__::375::IOProcess::(_processLogs) Extracting request inf=
ormation...<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,069::__init__::375::IOProcess::(_processLogs) (2325) Got request for=
method 'glob'<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,069::__init__::375::IOProcess::(_processLogs) (2325) Queuing respons=
e<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,079::__init__::375::IOProcess::(_processLogs) Receiving request...<o=
:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,082::__init__::375::IOProcess::(_processLogs) Queuing request in the=
thread pool...<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,082::__init__::375::IOProcess::(_processLogs) Extracting request inf=
ormation...<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,083::__init__::375::IOProcess::(_processLogs) (2326) Got request for=
method 'glob'<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,083::__init__::375::IOProcess::(_processLogs) (2326) Queuing respons=
e<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,085::__init__::375::IOProcess::(_processLogs) Receiving request...<o=
:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,086::__init__::375::IOProcess::(_processLogs) Queuing request in the=
thread pool...<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,086::__init__::375::IOProcess::(_processLogs) Extracting request inf=
ormation...<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,087::__init__::375::IOProcess::(_processLogs) (2327) Got request for=
method 'glob'<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,087::__init__::375::IOProcess::(_processLogs) (2327) Queuing respons=
e<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,089::__init__::375::IOProcess::(_processLogs) Receiving request...<o=
:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,089::__init__::375::IOProcess::(_processLogs) Queuing request in the=
thread pool...<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,090::__init__::375::IOProcess::(_processLogs) Extracting request inf=
ormation...<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,090::__init__::375::IOProcess::(_processLogs) (2328) Got request for=
method 'glob'<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,090::__init__::375::IOProcess::(_processLogs) (2328) Queuing respons=
e<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,092::__init__::375::IOProcess::(_processLogs) Receiving request...<o=
:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,093::__init__::375::IOProcess::(_processLogs) Queuing request in the=
thread pool...<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,093::__init__::375::IOProcess::(_processLogs) Extracting request inf=
ormation...<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,093::__init__::375::IOProcess::(_processLogs) (2329) Got request for=
method 'glob'<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,093::__init__::375::IOProcess::(_processLogs) (2329) Queuing respons=
e<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::ERROR::2014-11-17 15:50:32,095::sdc::14=
3::Storage.StorageDomainCache::(_findDomain) domain c46aa2c4-c405-45eb-b7fd=
-71f627d1c546 not found<o:p></o:p></p>
<p class=3D"MsoNormal">Traceback (most recent call last):<o:p></o:p></p>
<p class=3D"MsoNormal"> File "/usr/share/vdsm/storage/sdc.py&quo=
t;, line 141, in _findDomain<o:p></o:p></p>
<p class=3D"MsoNormal"> dom =3D findMethod(sdUUID)<o:p></=
o:p></p>
<p class=3D"MsoNormal"> File "/usr/share/vdsm/storage/sdc.py&quo=
t;, line 171, in _findUnfetchedDomain<o:p></o:p></p>
<p class=3D"MsoNormal"> raise se.StorageDomainDoesNotExis=
t(sdUUID)<o:p></o:p></p>
<p class=3D"MsoNormal">StorageDomainDoesNotExist: Storage domain does not e=
xist: (u'c46aa2c4-c405-45eb-b7fd-71f627d1c546',)<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::INFO::2014-11-17 15:50:32,096::localFsS=
D::73::Storage.StorageDomain::(create) sdUUID=3Dc46aa2c4-c405-45eb-b7fd-71f=
627d1c546 domainName=3Dtest remotePath=3D/vmstore/isos domClass=3D1<o:p></o=
:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,097::__init__::375::IOProcess::(_processLogs) Receiving request...<o=
:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,098::__init__::375::IOProcess::(_processLogs) Queuing request in the=
thread pool...<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,098::__init__::375::IOProcess::(_processLogs) Extracting request inf=
ormation...<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,098::__init__::375::IOProcess::(_processLogs) (2330) Got request for=
method 'access'<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,099::__init__::375::IOProcess::(_processLogs) (2330) Queuing respons=
e<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,107::__init__::375::IOProcess::(_processLogs) Receiving request...<o=
:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,107::__init__::375::IOProcess::(_processLogs) Queuing request in the=
thread pool...<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,108::__init__::375::IOProcess::(_processLogs) Extracting request inf=
ormation...<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,108::__init__::375::IOProcess::(_processLogs) (2331) Got request for=
method 'touch'<o:p></o:p></p>
<p class=3D"MsoNormal">ioprocess communication (5073)::DEBUG::2014-11-17 15=
:50:32,108::__init__::375::IOProcess::(_processLogs) (2331) Queuing respons=
e<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::ERROR::2014-11-17 15:50:32,109::fileSD:=
:92::Storage.fileSD::(validateFileSystemFeatures) Underlying file system do=
esn't supportdirect IO<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::ERROR::2014-11-17 15:50:32,109::task::8=
66::Storage.TaskManager.Task::(_setError) Task=3D`54f143dd-2229-4bc1-b639-9=
017ea3ecdd0`::Unexpected error<o:p></o:p></p>
<p class=3D"MsoNormal">Traceback (most recent call last):<o:p></o:p></p>
<p class=3D"MsoNormal"> File "/usr/share/vdsm/storage/task.py&qu=
ot;, line 873, in _run<o:p></o:p></p>
<p class=3D"MsoNormal"> return fn(*args, **kargs)<o:p></o=
:p></p>
<p class=3D"MsoNormal"> File "/usr/share/vdsm/logUtils.py",=
line 45, in wrapper<o:p></o:p></p>
<p class=3D"MsoNormal"> res =3D f(*args, **kwargs)<o:p></=
o:p></p>
<p class=3D"MsoNormal"> File "/usr/share/vdsm/storage/hsm.py&quo=
t;, line 2683, in createStorageDomain<o:p></o:p></p>
<p class=3D"MsoNormal"> domVersion)<o:p></o:p></p>
<p class=3D"MsoNormal"> File "/usr/share/vdsm/storage/localFsSD.=
py", line 84, in create<o:p></o:p></p>
<p class=3D"MsoNormal"> cls._preCreateValidation(sdUUID, =
mntPoint, remotePath, version)<o:p></o:p></p>
<p class=3D"MsoNormal"> File "/usr/share/vdsm/storage/localFsSD.=
py", line 51, in _preCreateValidation<o:p></o:p></p>
<p class=3D"MsoNormal"> fileSD.validateFileSystemFeatures=
(sdUUID, domPath)<o:p></o:p></p>
<p class=3D"MsoNormal"> File "/usr/share/vdsm/storage/fileSD.py&=
quot;, line 94, in validateFileSystemFeatures<o:p></o:p></p>
<p class=3D"MsoNormal"> raise se.StorageDomainTargetUnsup=
ported()<o:p></o:p></p>
<p class=3D"MsoNormal">StorageDomainTargetUnsupported: Storage Domain targe=
t is unsupported: ()<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:32,110::task::8=
85::Storage.TaskManager.Task::(_run) Task=3D`54f143dd-2229-4bc1-b639-9017ea=
3ecdd0`::Task._run: 54f143dd-2229-4bc1-b639-9017ea3ecdd0 (4, u'c46aa2c4-c40=
5-45eb-b7fd-71f627d1c546', u'test',
u'/vmstore/isos', 1, u'3') {} failed - stopping task<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:32,110::task::1=
217::Storage.TaskManager.Task::(stop) Task=3D`54f143dd-2229-4bc1-b639-9017e=
a3ecdd0`::stopping in state preparing (force False)<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:32,111::task::9=
93::Storage.TaskManager.Task::(_decref) Task=3D`54f143dd-2229-4bc1-b639-901=
7ea3ecdd0`::ref 1 aborting True<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::INFO::2014-11-17 15:50:32,111::task::11=
71::Storage.TaskManager.Task::(prepare) Task=3D`54f143dd-2229-4bc1-b639-901=
7ea3ecdd0`::aborting: Task is aborted: 'Storage Domain target is unsupporte=
d' - code 399<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:32,111::task::1=
176::Storage.TaskManager.Task::(prepare) Task=3D`54f143dd-2229-4bc1-b639-90=
17ea3ecdd0`::Prepare: aborted: Storage Domain target is unsupported<o:p></o=
:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:32,111::task::9=
93::Storage.TaskManager.Task::(_decref) Task=3D`54f143dd-2229-4bc1-b639-901=
7ea3ecdd0`::ref 0 aborting True<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:32,112::task::9=
28::Storage.TaskManager.Task::(_doAbort) Task=3D`54f143dd-2229-4bc1-b639-90=
17ea3ecdd0`::Task._doAbort: force False<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:32,112::resourc=
eManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll r=
equests {}<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:32,112::task::5=
95::Storage.TaskManager.Task::(_updateState) Task=3D`54f143dd-2229-4bc1-b63=
9-9017ea3ecdd0`::moving from state preparing -> state aborting<o:p></o:p=
></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:32,113::task::5=
50::Storage.TaskManager.Task::(__state_aborting) Task=3D`54f143dd-2229-4bc1=
-b639-9017ea3ecdd0`::_aborting: recover policy none<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:32,113::task::5=
95::Storage.TaskManager.Task::(_updateState) Task=3D`54f143dd-2229-4bc1-b63=
9-9017ea3ecdd0`::moving from state aborting -> state failed<o:p></o:p></=
p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:32,113::resourc=
eManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll=
requests {} resources {}<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:32,113::resourc=
eManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll r=
equests {}<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::ERROR::2014-11-17 15:50:32,114::dispatc=
her::76::Storage.Dispatcher::(wrapper) {'status': {'message': 'Storage Doma=
in target is unsupported: ()', 'code': 399}}<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1736::DEBUG::2014-11-17 15:50:32,114::stompRe=
actor::163::yajsonrpc.StompServer::(send) Sending response<o:p></o:p></p>
<p class=3D"MsoNormal">JsonRpc (StompReactor)::DEBUG::2014-11-17 15:50:32,2=
12::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message =
<StompFrame command=3D'SEND'><o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1744::DEBUG::2014-11-17 15:50:32,215::__init_=
_::467::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StoragePool.disconn=
ectStorageServer' in bridge with {u'connectionParams': [{u'id': u'deb5a580-=
6994-4db9-9899-cd05c39c2efa', u'connection':
u'/vmstore/isos', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'password': u=
'', u'port': u''}], u'storagepoolID': u'00000000-0000-0000-0000-00000000000=
0', u'domainType': 4}<o:p></o:p></p>
<p class=3D"MsoNormal">JsonRpcServer::DEBUG::2014-11-17 15:50:32,215::__ini=
t__::504::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request<o:p><=
/o:p></p>
<p class=3D"MsoNormal">Thread-1744::DEBUG::2014-11-17 15:50:32,220::task::5=
95::Storage.TaskManager.Task::(_updateState) Task=3D`11fde6d7-ad39-4b40-bb4=
0-00816fe2c131`::moving from state init -> state preparing<o:p></o:p></p=
>
<p class=3D"MsoNormal">Thread-1744::INFO::2014-11-17 15:50:32,221::logUtils=
::44::dispatcher::(wrapper) Run and protect: disconnectStorageServer(domTyp=
e=3D4, spUUID=3Du'00000000-0000-0000-0000-000000000000', conList=3D[{u'conn=
ection': u'/vmstore/isos', u'iqn': u'',
u'user': u'', u'tpgt': u'1', u'password': '******', u'id': u'deb5a580-6994=
-4db9-9899-cd05c39c2efa', u'port': u''}], options=3DNone)<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1744::DEBUG::2014-11-17 15:50:32,221::misc::7=
41::Storage.SamplingMethod::(__call__) Trying to enter sampling method (sto=
rage.sdc.refreshStorage)<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1744::DEBUG::2014-11-17 15:50:32,222::misc::7=
43::Storage.SamplingMethod::(__call__) Got in to sampling method<o:p></o:p>=
</p>
<p class=3D"MsoNormal">Thread-1744::DEBUG::2014-11-17 15:50:32,222::misc::7=
41::Storage.SamplingMethod::(__call__) Trying to enter sampling method (sto=
rage.iscsi.rescan)<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1744::DEBUG::2014-11-17 15:50:32,222::misc::7=
43::Storage.SamplingMethod::(__call__) Got in to sampling method<o:p></o:p>=
</p>
<p class=3D"MsoNormal">Thread-1744::DEBUG::2014-11-17 15:50:32,222::iscsi::=
403::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 =
seconds<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1744::DEBUG::2014-11-17 15:50:32,223::iscsiad=
m::92::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m se=
ssion -R (cwd None)<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1744::DEBUG::2014-11-17 15:50:32,276::misc::7=
51::Storage.SamplingMethod::(__call__) Returning last result<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1744::DEBUG::2014-11-17 15:50:32,280::multipa=
th::110::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /sbin/multipath (cw=
d None)<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1744::DEBUG::2014-11-17 15:50:32,409::multipa=
th::110::Storage.Misc.excCmd::(rescan) SUCCESS: <err> =3D ''; <rc&=
gt; =3D 0<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1744::DEBUG::2014-11-17 15:50:32,410::lvm::48=
9::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate op=
eration' got the operation mutex<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1744::DEBUG::2014-11-17 15:50:32,410::lvm::49=
1::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate op=
eration' released the operation mutex<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1744::DEBUG::2014-11-17 15:50:32,411::lvm::50=
0::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate op=
eration' got the operation mutex<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1744::DEBUG::2014-11-17 15:50:32,411::lvm::50=
2::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate op=
eration' released the operation mutex<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1744::DEBUG::2014-11-17 15:50:32,412::lvm::52=
0::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate op=
eration' got the operation mutex<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1744::DEBUG::2014-11-17 15:50:32,412::lvm::52=
2::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate op=
eration' released the operation mutex<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1744::DEBUG::2014-11-17 15:50:32,412::misc::7=
51::Storage.SamplingMethod::(__call__) Returning last result<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1744::INFO::2014-11-17 15:50:32,413::logUtils=
::47::dispatcher::(wrapper) Run and protect: disconnectStorageServer, Retur=
n response: {'statuslist': [{'status': 0, 'id': u'deb5a580-6994-4db9-9899-c=
d05c39c2efa'}]}<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1744::DEBUG::2014-11-17 15:50:32,413::task::1=
191::Storage.TaskManager.Task::(prepare) Task=3D`11fde6d7-ad39-4b40-bb40-00=
816fe2c131`::finished: {'statuslist': [{'status': 0, 'id': u'deb5a580-6994-=
4db9-9899-cd05c39c2efa'}]}<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1744::DEBUG::2014-11-17 15:50:32,414::task::5=
95::Storage.TaskManager.Task::(_updateState) Task=3D`11fde6d7-ad39-4b40-bb4=
0-00816fe2c131`::moving from state preparing -> state finished<o:p></o:p=
></p>
<p class=3D"MsoNormal">Thread-1744::DEBUG::2014-11-17 15:50:32,414::resourc=
eManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll=
requests {} resources {}<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1744::DEBUG::2014-11-17 15:50:32,414::resourc=
eManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll r=
equests {}<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1744::DEBUG::2014-11-17 15:50:32,414::task::9=
93::Storage.TaskManager.Task::(_decref) Task=3D`11fde6d7-ad39-4b40-bb40-008=
16fe2c131`::ref 0 aborting False<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1744::DEBUG::2014-11-17 15:50:32,415::__init_=
_::498::jsonrpc.JsonRpcServer::(_serveRequest) Return 'StoragePool.disconne=
ctStorageServer' in bridge with [{'status': 0, 'id': u'deb5a580-6994-4db9-9=
899-cd05c39c2efa'}]<o:p></o:p></p>
<p class=3D"MsoNormal">Thread-1744::DEBUG::2014-11-17 15:50:32,416::stompRe=
actor::163::yajsonrpc.StompServer::(send) Sending response<o:p></o:p></p>
<p class=3D"MsoNormal"><o:p> </o:p></p>
</div>
</body>
</html>
--_000_EE5DB373DE433D4A87A60EE0284C92F926421EFDRISXMBX03adsuhn_--
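The failing step in the trace above is vdsm's direct-I/O check on /vmstore/isos (validateFileSystemFeatures raising StorageDomainTargetUnsupported, error code 399). To test the same property outside vdsm, something like the following rough probe can be used. It is not the vdsm code, just a minimal sketch; it assumes a Linux host with Python 3, and os.O_DIRECT is Linux-specific:

import mmap
import os

def supports_direct_io(directory):
    # Best-effort probe: can a page be written with O_DIRECT in `directory`?
    # Mirrors the spirit of vdsm's check, not its implementation.
    path = os.path.join(directory, "__directio_probe__")
    buf = mmap.mmap(-1, 4096)          # page-aligned, zero-filled buffer
    try:
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o644)
    except OSError:                    # some filesystems reject O_DIRECT at open()
        return False
    try:
        os.write(fd, buf)              # EINVAL here also means no direct I/O
        return True
    except OSError:
        return False
    finally:
        os.close(fd)
        os.unlink(path)

print(supports_direct_io("/vmstore/isos"))

If this prints False for /vmstore/isos, the backing filesystem (or its mount options) does not allow direct I/O, which is exactly what the "Underlying file system doesn't support direct IO" error above is complaining about.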
10 years
Re: [ovirt-users] ovirt-websocket-proxy uses the wrong IP
by mots
Thank you for your reply.

The engine resolves the node's IP correctly. A few details about the setup: It's a two node cluster with shared, internal storage using DRBD. The storage is managed by pacemaker, so that the node which currently serves as iscsi target gets assigned the additional IP.
The Engine/websocket-proxy then always connects to the storage IP when it attempts to connect to the node which currently functions as iscsi target. This only happens in oVirt 3.5, and I was able to "fix" it by going back to 3.4.

Regards,

mots

-----Original Message-----
> From: Simone Tiraboschi <stirabos(a)redhat.com>
> Sent: Mon 17 November 2014 18:11
> To: Patrick Lottenbach <pl(a)a-bot.ch>
> CC: users(a)ovirt.org
> Subject: Re: [ovirt-users] ovirt-websocket-proxy uses the wrong IP
>
> ----- Original Message -----
> > From: "mots" <mots(a)nepu.moe>
> > To: users(a)ovirt.org
> > Sent: Saturday, November 15, 2014 10:24:29 PM
> > Subject: [ovirt-users] ovirt-websocket-proxy uses the wrong IP
> >
> > Hello,
> >
> > One of my nodes has two IP addresses, 10.42.0.101 and 10.42.0.103. Ovirt is
> > configured to use 10.42.0.101, yet the ovirt-websocket-proxy service tries
> > to connect to 10.42.0.103, where no VNC server is listening.
> >
> > Is there any way I can configure it to use the correct address?
> >
> > [root@engine ~]#
> > /usr/share/ovirt-engine/services/ovirt-websocket-proxy/ovirt-websocket-proxy.py
> > --debug start
> > ovirt-websocket-proxy[1838] DEBUG _daemon:403 daemon entry pid=1838
> > ovirt-websocket-proxy[1838] DEBUG _daemon:404 background=False
> > ovirt-websocket-proxy[1838] DEBUG loadFile:70 loading config
> > '/usr/share/ovirt-engine/services/ovirt-websocket-proxy/ovirt-websocket-proxy.conf'
> > ovirt-websocket-proxy[1838] DEBUG loadFile:70 loading config
> > '/etc/ovirt-engine/ovirt-websocket-proxy.conf.d/10-setup.conf'
> > ovirt-websocket-proxy[1838] DEBUG _daemon:440 I am a daemon 1838
> > ovirt-websocket-proxy[1838] DEBUG _setLimits:377 Setting rlimits
> > WebSocket server settings:
> > - Listen on *:6100
>
> The WebSocketProxy is listening on all the available IPs, so no problems on that side.
>
> > - Flash security policy server
> > - SSL/TLS support
> > - Deny non-SSL/TLS connections
> > - proxying from *:6100 to targets in /dummy
> >
> > 1: 10.42.0.1: new handler Process
> > 1: 10.42.0.1: SSL/TLS (wss://) WebSocket connection
> > 1: 10.42.0.1: Version hybi-13, base64: 'True'
> > 1: 10.42.0.1: Path:
> > '/eyJ2YWxpZFRvIjoiMjAxNDExMTUyMTI1MDgiLCJkYXRhIjoiJTdCJTIyaG9zdCUyMjolMjIxMC40Mi4wLjEwMyUyMiwlMjJwb3J0JTIyOiUyMjU5MDElMjIsJTIyc3NsX3RhcmdldCUyMjpmYWxzZSU3RCIsInZhbGlkRnJvbSI6IjIwMTQxMTE1MjEyMzA4Iiwic2lnbmVkRmllbGRzIjoidmFsaWRUbyxkYXRhLHZhbGlkRnJvbSxzYWx0Iiwic2lnbmF0dXJlIjoiajRQUmxwYjBvT0dOZUNPaHZKK01wUTVrVGRMYVA0Sm8zRDIzTGlXRlZYRm4xNU9KN0NZVmw5OTBpNTBUNzlVZkpqUzRlRmZ1SHJhT1c4TlFNbXIwanZXSUpTWCtnL3RYSnc4MWRFS2wrcFVPVHo3MWlmY2dTbXdITmptOUkwTTl6Q0NNR2dvbE1BRzZwMndFbDFySDdSZkhMWnIvOGo4bnpnVGZ0NlhaOTdBcHgyejhkMlo0UjRmdklXemtXMjErMDdsNWw4dXpNVytEM1FmaWdDS1Q3V3VKdlFHNi9SSC9zZWRBWHJXcnFUNXYzTHNuNVl0MWtYb2lGV3ZYOHNUdE5PdGdvQWk3eGN5WUhGaEM1ei9SMjZXNEkrSlJNcDZlVDNxbWVlZnM0eWRSN0NpZWwzZWZvZDB5TU9meGJwMG9EMGlscXVWUWVjK1JxeGxqd21ZVG5BPT0iLCJzYWx0IjoiWGhVQ1dYL2hQU1U9In0='
> > 1: connecting to: 10.42.0.103:5901
>
> The engine is instructing the WebSocket proxy to connect to the host on the wrong IP address.
> Are you using an all-in-one setup where the engine, KVM and the websocketproxy are on a single machine?
> Can you please check how the engine machine resolve the host name?
>
> > 1: handler exception: [Errno 111] Connection refused
> > 1: Traceback (most recent call last):
> > File "/usr/lib/python2.6/site-packages/websockify/websocket.py", line 711, in
> > top_new_client
> > self.new_client()
> > File "/usr/lib/python2.6/site-packages/websockify/websocketproxy.py", line
> > 183, in new_client
> > connect=True, use_ssl=self.ssl_target, unix_socket=self.unix_target)
> > File "/usr/lib/python2.6/site-packages/websockify/websocket.py", line 188, in
> > socket
> > sock.connect(addrs[0][4])
> > File "<string>", line 1, in connect
> > error: [Errno 111] Connection refused
> >
> > Regards,
> >
> > mots
> >
> > _______________________________________________
> > Users mailing list
> > Users(a)ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
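A quick way to double-check Simone's point about what the engine handed to the proxy: the noVNC path in the debug output is a base64-encoded JSON ticket, so it can be decoded locally. A small sketch, standard library only (the token below is shortened for readability; paste the full string from the log without the leading slash, and on the Python 2.6 host use urllib.unquote instead of urllib.parse):

import base64
import json
from urllib.parse import unquote

# Shortened here -- use the complete token from the proxy log.
ticket = "eyJ2YWxpZFRvIjoiMjAxNDExMTUyMTI1MDgiLCJkYXRhIjoi..."

outer = json.loads(base64.b64decode(ticket))
inner = json.loads(unquote(outer["data"]))
print(inner["host"], inner["port"], inner.get("ssl_target"))

For the token in this thread the host and port come out as 10.42.0.103 and 5901, i.e. the address the engine itself embedded in the ticket, which matches the "connecting to: 10.42.0.103:5901" line in the proxy output.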
10 years
ovirt-websocket-proxy uses the wrong IP
by mots
Hello,

One of my nodes has two IP addresses, 10.42.0.101 and 10.42.0.103. Ovirt is configured to use 10.42.0.101, yet the ovirt-websocket-proxy service tries to connect to 10.42.0.103, where no VNC server is listening.

Is there any way I can configure it to use the correct address?

[root@engine ~]# /usr/share/ovirt-engine/services/ovirt-websocket-proxy/ovirt-websocket-proxy.py --debug start
ovirt-websocket-proxy[1838] DEBUG _daemon:403 daemon entry pid=1838
ovirt-websocket-proxy[1838] DEBUG _daemon:404 background=False
ovirt-websocket-proxy[1838] DEBUG loadFile:70 loading config '/usr/share/ovirt-engine/services/ovirt-websocket-proxy/ovirt-websocket-proxy.conf'
ovirt-websocket-proxy[1838] DEBUG loadFile:70 loading config '/etc/ovirt-engine/ovirt-websocket-proxy.conf.d/10-setup.conf'
ovirt-websocket-proxy[1838] DEBUG _daemon:440 I am a daemon 1838
ovirt-websocket-proxy[1838] DEBUG _setLimits:377 Setting rlimits
WebSocket server settings:
  - Listen on *:6100
  - Flash security policy server
  - SSL/TLS support
  - Deny non-SSL/TLS connections
  - proxying from *:6100 to targets in /dummy

  1: 10.42.0.1: new handler Process
  1: 10.42.0.1: SSL/TLS (wss://) WebSocket connection
  1: 10.42.0.1: Version hybi-13, base64: 'True'
  1: 10.42.0.1: Path: '/eyJ2YWxpZFRvIjoiMjAxNDExMTUyMTI1MDgiLCJkYXRhIjoiJTdCJTIyaG9zdCUyMjolMjIxMC40Mi4wLjEwMyUyMiwlMjJwb3J0JTIyOiUyMjU5MDElMjIsJTIyc3NsX3RhcmdldCUyMjpmYWxzZSU3RCIsInZhbGlkRnJvbSI6IjIwMTQxMTE1MjEyMzA4Iiwic2lnbmVkRmllbGRzIjoidmFsaWRUbyxkYXRhLHZhbGlkRnJvbSxzYWx0Iiwic2lnbmF0dXJlIjoiajRQUmxwYjBvT0dOZUNPaHZKK01wUTVrVGRMYVA0Sm8zRDIzTGlXRlZYRm4xNU9KN0NZVmw5OTBpNTBUNzlVZkpqUzRlRmZ1SHJhT1c4TlFNbXIwanZXSUpTWCtnL3RYSnc4MWRFS2wrcFVPVHo3MWlmY2dTbXdITmptOUkwTTl6Q0NNR2dvbE1BRzZwMndFbDFySDdSZkhMWnIvOGo4bnpnVGZ0NlhaOTdBcHgyejhkMlo0UjRmdklXemtXMjErMDdsNWw4dXpNVytEM1FmaWdDS1Q3V3VKdlFHNi9SSC9zZWRBWHJXcnFUNXYzTHNuNVl0MWtYb2lGV3ZYOHNUdE5PdGdvQWk3eGN5WUhGaEM1ei9SMjZXNEkrSlJNcDZlVDNxbWVlZnM0eWRSN0NpZWwzZWZvZDB5TU9meGJwMG9EMGlscXVWUWVjK1JxeGxqd21ZVG5BPT0iLCJzYWx0IjoiWGhVQ1dYL2hQU1U9In0='
  1: connecting to: 10.42.0.103:5901
  1: handler exception: [Errno 111] Connection refused
  1: Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/websockify/websocket.py", line 711, in top_new_client
    self.new_client()
  File "/usr/lib/python2.6/site-packages/websockify/websocketproxy.py", line 183, in new_client
    connect=True, use_ssl=self.ssl_target, unix_socket=self.unix_target)
  File "/usr/lib/python2.6/site-packages/websockify/websocket.py", line 188, in socket
    sock.connect(addrs[0][4])
  File "<string>", line 1, in connect
error: [Errno 111] Connection refused


Regards,

mots
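To confirm on which of the two addresses a VNC server is actually reachable, a minimal connectivity check along these lines reproduces the [Errno 111] outside the proxy (the two addresses and port 5901 are taken from the log above):

import socket

for ip in ("10.42.0.101", "10.42.0.103"):
    s = socket.socket()
    s.settimeout(3)
    try:
        s.connect((ip, 5901))
        print(ip, "accepts connections on 5901")
    except OSError as err:          # e.g. [Errno 111] Connection refused
        print(ip, "failed:", err)
    finally:
        s.close()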
10 years
can't create new disk for vm
by Nathanaël Blanchet
Hi all,
Since I upgraded the engine to 3.5, I can't add a new disk to existing
VMs anymore, nor create a new VM with a new disk. Clicking the OK button
after defining the disk does nothing. I have tried several combinations
(preallocated or thin provisioned) and different storage domains, but the
result is the same. Has anybody encountered the same issue?
10 years
noVNC problems after upgrading to 3.5.0
by Darrell Budic
I had noVNC working under 3.4, but can’t seem to get it back up after updating to 3.5.0. VNC is working if I make direct connections, but it looks like the web socket proxy never tries to connect to the host server. noVNC is just reporting the generic 1006 error. Firefox reports it already has the right ca.crt installed, so it’s not that. From watching the network, it looks like it never gets authenticated properly to the web proxy, and never tries to connect on from there.
Any way to get some debugging info for the web socket proxy? Not locating any in the usual log files when I try this…
Anyone else seeing a similar problem?
-Darrell
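One way to get more than the generic 1006 out of this is to run the proxy in the foreground with debug logging, the same invocation used in the websocket-proxy thread elsewhere in this digest. A rough wrapper, assuming the default 3.5 install path and that the ovirt-websocket-proxy service has been stopped first so the port is free:

import subprocess

cmd = [
    "/usr/share/ovirt-engine/services/ovirt-websocket-proxy/ovirt-websocket-proxy.py",
    "--debug",
    "start",
]
# Stream the proxy's debug output to the terminal until interrupted.
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in proc.stdout:
    print(line.decode("utf-8", "replace"), end="")

In that thread the debug output showed each proxied connection and the target host:port the engine requested, which should be enough to see where the handshake stops here.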
10 years
Strange error messages
by Demeter Tibor
Hi,
This morning I got a lot of similar messages like these on the console:
2014-Nov-17, 03:21
Detected conflict in hook set-POST-30samba-set.sh of Cluster r710cluster1.
2014-Nov-17, 03:21
Detected conflict in hook stop-PRE-29CTDB-teardown.sh of Cluster r710cluster1.
2014-Nov-17, 03:21
Detected conflict in hook add-brick-PRE-28Quota-enable-root-xattr-heal.sh of Cluster r710cluster1.
2014-Nov-17, 03:21
Detected conflict in hook set-POST-31ganesha-set.sh of Cluster r710cluster1.
2014-Nov-17, 03:21
Detected conflict in hook start-POST-30samba-start.sh of Cluster r710cluster1.
2014-Nov-17, 03:21
Detected conflict in hook reset-POST-31ganesha-reset.sh of Cluster r710cluster1.
2014-Nov-17, 03:21
Detected conflict in hook gsync-create-POST-56glusterd-geo-rep-create-post.sh of Cluster r710cluster1.
What does this mean?
The system seems to be working.
Thanks:
Tibor
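As far as I understand it, these messages come from the engine's gluster hook synchronization: a "conflict" is flagged when a hook script's content or status differs between the hosts in the cluster (or from the copy the engine has). One way to see which hook files actually differ is to checksum the hooks tree on each host and compare the output; a sketch, assuming the usual glusterfs hooks location:

import hashlib
import os

HOOKS_DIR = "/var/lib/glusterd/hooks/1"   # default glusterfs hooks tree

for root, _dirs, files in os.walk(HOOKS_DIR):
    for name in sorted(files):
        path = os.path.join(root, name)
        with open(path, "rb") as f:
            digest = hashlib.md5(f.read()).hexdigest()
        print(digest, os.path.relpath(path, HOOKS_DIR))

Run it on each node and diff the results; the hooks named in the alerts should show up with differing checksums, or as missing on one side.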
10 years
(no subject)
by Harald Wolf
Hi,
is the hardware of a Host (oVirt Node/Hypervisor) responsible for the best possible computing power for the console users?
--
This message was sent from my Android phone with K-9 Mail.
10 years