Maintenance on the Mailing-Lists
by Marc Dequènes (Duck)
Quack,
I'm working in the OSAS team and arrived recently. I look forward to working with you.
With the oVirt infra team we're working on the mailing lists. In the
past there were SPF problems that led to mail being classified as spam,
especially affecting Gmail users. A workaround was put in place, but it isn't
clean, and most of the history from that time has been lost.
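For anyone who wants to look at the published policy themselves, a domain's SPF
record is just a TXT record; the domain below is only an example, adjust it to
the list's actual sending domain:

  dig +short TXT ovirt.org | grep -i spf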
I'm now going to make some changes which I believe will work, but just
in case I'm sending this message to the main mailing lists so you can
check your spam folder and come shout at me on #ovirt@freenode if it fails :-).
Regards.
On 3.6.6, tried doing a live VM storage migration... didn't work
by Christopher Cox
In our old 3.4 oVirt, I know I've migrated storage on live VMs and everything
seemed to work.
However, on 3.6.6 I tried this: I saw the warning about moving storage on a
live VM (the VM wasn't doing much of anything) and went ahead and migrated the
storage from one storage domain to another. When it was through, the VM was
still alive, but when I tried to write to a virtual disk that was part of the
move, the VM was paused with a message saying there wasn't enough storage.
I could unpause the VM, but within a few seconds, with things writing to the
virtual disk, it was paused again with the same out-of-space message. The vdsm
logs showed the ENOSPC return code, so the pause made sense; it's just that the
VM itself shows plenty of free space. Once I rebooted the VM, everything went
back to normal.
So is moving storage for a live VM not supported? I guess we got lucky in our
3.4 system (?)
Cinder Snapshot Issues
by Kevin Hrpcek
Hello,
I'm running into a problem with live snapshots not working when using
cinder/ceph disks. The failures differ depending on whether memory is
included, but in each case cinder/ceph creates a new snapshot that can be seen
in both cinder and ceph. When doing a memory+disk snapshot the VM ends up in a
paused state and I need to kill -9 the qemu process to be able to boot the VM
again. The engine seems to lose its connection to the vdsm process on the VM's
host after freezing the guest's filesystems. The guest never receives the thaw
command, and this shows up as a failure in the logs. I am pasting in some log
snippets.
2016-04-12 19:24:58,851 INFO
[org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
(org.ovirt.thread.pool-8-thread-27) [5c4493e] Ending command
'org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand' successfully.
2016-04-12 19:27:56,873 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-27) [4d97ca06] Correlation ID: null, Call
Stack: null, Custom Event ID: -1, Message: VDSM OVCL1A command failed:
Message timeout which can be caused by communication issues
2016-04-12 19:27:56,873 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(DefaultQuartzScheduler_Worker-27) [4d97ca06] Command
'org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand' return value
'StatusOnlyReturnForXmlRpc [status=StatusForXmlRpc [code=5022,
message=Message timeout which can be caused by communication issues]]'
2016-04-12 19:27:56,874 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(DefaultQuartzScheduler_Worker-27) [4d97ca06] HostName = OVCL1A
2016-04-12 19:27:56,874 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(DefaultQuartzScheduler_Worker-27) [4d97ca06] Command
'SnapshotVDSCommand(HostName = OVCL1A,
SnapshotVDSCommandParameters:{runAsync='true',
hostId='9bdfaedc-34a8-4a08-ad8a-c117835a6094',
vmId='040609f6-cfe0-4763-8b32-08ffad158c93'})' execution failed:
VDSGenericException: VDSNetworkException: Message timeout which can be
caused by communication issues
2016-04-12 19:27:56,875 WARN [org.ovirt.engine.core.vdsbroker.VdsManager]
(org.ovirt.thread.pool-8-thread-16) [4d97ca06] Host 'OVCL1A' is not
responding.
Disk-only live snapshots freeze the guest filesystems and the VM does receive
the thaw command, but the VM is no longer responsive afterwards. It still
answers pings on the network, but it is hung and likewise needs a kill -9 of
the qemu process before it can be booted again.
jsonrpc.Executor/0::DEBUG::2016-04-12
19:41:58,342::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling
'VM.snapshot' in bridge with {u'frozen': True, u'vmID':
u'040609f6-cfe0-4763-8b32-08ffad158c93', u'snapDrives': []}
jsonrpc.Executor/0::INFO::2016-04-12
19:41:58,343::vm::3237::virt.vm::(snapshot)
vmId=`040609f6-cfe0-4763-8b32-08ffad158c93`::<domainsnapshot>
<disks/>
</domainsnapshot>
jsonrpc.Executor/0::ERROR::2016-04-12
19:41:58,346::vm::3252::virt.vm::(snapshot)
vmId=`040609f6-cfe0-4763-8b32-08ffad158c93`::Unable to take snapshot
Traceback (most recent call last):
File "/usr/share/vdsm/virt/vm.py", line 3250, in snapshot
self._dom.snapshotCreateXML(snapxml, snapFlags)
File "/usr/share/vdsm/virt/virdomain.py", line 68, in f
ret = attr(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line
124, in wrapper
ret = f(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1313, in
wrapper
return func(inst, *args, **kwargs)
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2581, in
snapshotCreateXML
if ret is None:raise libvirtError('virDomainSnapshotCreateXML()
failed', dom=self)
libvirtError: unsupported configuration: nothing selected for snapshot
jsonrpc.Executor/7::DEBUG::2016-04-12
19:41:58,391::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling
'VM.thaw' in bridge with {u'vmID': u'040609f6-cfe0-4763-8b32-08ffad158c93'}
jsonrpc.Executor/7::INFO::2016-04-12
19:41:58,391::vm::3041::virt.vm::(thaw)
vmId=`040609f6-cfe0-4763-8b32-08ffad158c93`::Thawing guest filesystems
jsonrpc.Executor/7::INFO::2016-04-12
19:41:58,396::vm::3056::virt.vm::(thaw)
vmId=`040609f6-cfe0-4763-8b32-08ffad158c93`::6 guest filesystems thawed
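The vdsm traceback above corresponds to asking libvirt for a snapshot while
selecting no disks at all ('snapDrives': [] becomes an XML body containing only
an empty <disks/>), which libvirt rejects with "nothing selected for snapshot".
A minimal sketch of the equivalent libvirt call; the connection URI, VM name
and flag here are chosen for illustration rather than taken from vdsm:

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('some-vm')   # hypothetical VM name
snapxml = "<domainsnapshot><disks/></domainsnapshot>"
# A disk-only snapshot request that selects no disks is exactly what libvirt
# refuses with "unsupported configuration: nothing selected for snapshot".
dom.snapshotCreateXML(snapxml, libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY)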
Everything else is working well with cinder for running VMs (making disks,
running VMs, live migration, etc...). I was able to get live snapshots when
using a CephFS Posix storage domain.
Versions:
Ceph 9.2.0
oVirt Latest
CentOS 7.2
Cinder 7.0.1-1.el7
Any help would be appreciated.
Thanks,
Kevin
Re: [ovirt-users] How to setup host network to link ovirtmgmt with eth0 automatically?
by Ondřej Svoboda
Kai,
The symptoms after removing vdsm-hook-ovs look as if the ovirtmgmt
bridged network was never created. The log to look in would be
supervdsm.log, but it looks like yours was rotated after supervdsmd
was restarted (the warning is benign). Can you dig out "supervdsm.log.1"?
OVS support was initially provided via this hook (it was the first attempt at
supporting OVS). Currently, proper OVS support is in development and should
appear in 4.0 as a tech preview.
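As for the after_get_stats traceback you pasted (quoted below): the KeyError
only means the stats dictionary handed to the 50_ovs hook has no 'network' key
on that host. A hedged sketch of the kind of guard that avoids it, assuming the
hook uses vdsm's hooking.read_json()/write_json() helpers; this is an
illustration, not the actual upstream fix:

import hooking

def main():
    stats = hooking.read_json()
    # Only touch network stats if vdsm actually reported any.
    # ovs_networks_stats() is the hook's own helper, visible in the traceback.
    if 'network' in stats:
        stats['network'].update(ovs_networks_stats(stats['network']))
    hooking.write_json(stats)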
Thanks,
Ondra (osvoboda@redhat.com)
On 25.5.2016 11:01, Kai Kang wrote:
>
>
> On Tue, May 24, 2016 at 8:43 PM, Ondřej Svoboda <ondrej@svobodasoft.cz> wrote:
>
> Hi again, (correcting myself a bit)
>
> are using the Open vSwitch hook intentionally? If not, you can
> |yum remove vdsm-hook-ovs| for a while. What is your exact VDSM
> version?
>
>
> My VDSM version is 4.17.24 and ovirt-engine is 3.6.4. After removing the
> ovs hooks, it still fails with:
>
> "Host node3 does not comply with the cluster Default networks, the
> following networks are missing on host: 'ovirtmgmt'".
>
> I checked vdsm.log and there seem to be no errors there. In supervdsm.log,
> it shows these warnings:
>
> sourceRoute::INFO::2016-05-25
> 07:57:41,779::sourceroutethread::60::root::(process_IN_CLOSE_WRITE_filePath)
> interface ovirtmgmt is not a libvirt interface
> sourceRoute::WARNING::2016-05-25
> 07:57:41,780::utils::140::root::(rmFile) File:
> /var/run/vdsm/trackedInterfaces/ovirtmgmt already removed
>
>
> What should I check next? Thanks.
>
> I uploaded vdsm.log to:
>
> https://gist.github.com/parr0tr1ver/1e38171b5d12cf77321101530276d368
>
> and supervdsm.log:
>
> https://gist.github.com/parr0tr1ver/97805698a485f1cd49ded2b095297531
>
>
> Also, can you give a little more context from your vdsm.log, and
> also from supervdsm.log? I think the vdsm.log failure is only
> related to stats reporting, and is not the root problem.
>
> If you don't have any confidential information in the logs (or you
> can remove it), I suggest that you open a bug on product=vdsm at
> https://bugzilla.redhat.com/ Have no fear about naming the bug, it
> can be renamed if necessary.
>
>
>
> I found a bug at
> https://bugzilla.redhat.com/show_bug.cgi?id=1234867. The "Target
> Milestone" for "support ovs via vdsm hook" is ovirt-3.6.7. Does that
> mean OVS does not actually work in previous versions?
>
> Thanks a lot.
>
> --Kai
>
>
>
> Thanks, Ondra
>
>> On 24.5.2016 11:47, Kai Kang wrote:
>>> Hi,
>>>
>>> I checked vdsm.log; it shows an error:
>>>
>>> jsonrpc.Executor/0::DEBUG::2016-05-24
>>> 09:46:04,056::utils::671::root::(execCmd) /usr/bin/taskset
>>> --cpu-list 0-7 /usr/lib64/vdsm/vdsm/hooks/after_get_stats/50_ovs
>>> (cwd None)
>>> jsonrpc.Executor/0::DEBUG::2016-05-24
>>> 09:46:04,136::utils::689::root::(execCmd) FAILED: <err> =
>>> 'Traceback (most recent call last):\n File
>>> "/usr/lib64/vdsm/vdsm/hooks/after_get_stats/50_ovs", line 72, in
>>> <module>\n main()\n File
>>> "/usr/lib64/vdsm/vdsm/hooks/after_get_stats/50_ovs", line 66, in
>>> main\n
>>> stats[\'network\'].update(ovs_networks_stats(stats[\'network\']))\nKeyError:
>>> \'network\'\n\n'; <rc> = 2
>>> jsonrpc.Executor/0::INFO::2016-05-24
>>> 09:46:04,136::hooks::98::root::(_runHooksDir) Traceback (most
>>> recent call last):
>>> File "/usr/lib64/vdsm/vdsm/hooks/after_get_stats/50_ovs", line
>>> 72, in <module>
>>> main()
>>> File "/usr/lib64/vdsm/vdsm/hooks/after_get_stats/50_ovs", line
>>> 66, in main
>>> stats['network'].update(ovs_networks_stats(stats['network']))
>>> KeyError: 'network'
>>>
>>>
>>> I'm checking what's wrong with it.
>>>
>>> Thanks,
>>> Kai
>>>
>>>
>>>
>>> On Tue, May 24, 2016 at 3:57 PM, Kai Kang <kai.7.kang@gmail.com> wrote:
>>>
>>> And network configurations on node:
>>>
>>> [root@ovirt-node] # brctl show
>>> bridge name bridge id STP enabled interfaces
>>> docker0 8000.0242ae1de711 no
>>> ovirtmgmt 8000.001320ff73aa no eth0
>>>
>>> [root@ovirt-node] # ifconfig -a
>>> docker0 Link encap:Ethernet HWaddr 02:42:ae:1d:e7:11
>>> inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
>>> UP BROADCAST MULTICAST MTU:1500 Metric:1
>>> RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>>> TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>>> collisions:0 txqueuelen:0
>>> RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
>>>
>>> eth0 Link encap:Ethernet HWaddr 00:13:20:ff:73:aa
>>> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
>>> RX packets:6331 errors:0 dropped:0 overruns:0 frame:0
>>> TX packets:2866 errors:0 dropped:0 overruns:0
>>> carrier:0
>>> collisions:0 txqueuelen:1000
>>> RX bytes:762182 (744.3 KiB) TX bytes:210611
>>> (205.6 KiB)
>>> Interrupt:20 Memory:d1100000-d1120000
>>>
>>> lo Link encap:Local Loopback
>>> inet addr:127.0.0.1 Mask:255.0.0.0
>>> inet6 addr: ::1/128 Scope:Host
>>> UP LOOPBACK RUNNING MTU:65536 Metric:1
>>> RX packets:36 errors:0 dropped:0 overruns:0 frame:0
>>> TX packets:36 errors:0 dropped:0 overruns:0 carrier:0
>>> collisions:0 txqueuelen:0
>>> RX bytes:2478 (2.4 KiB) TX bytes:2478 (2.4 KiB)
>>>
>>> ovirtmgmt Link encap:Ethernet HWaddr 00:13:20:ff:73:aa
>>> inet addr:128.224.165.170 Bcast:128.224.165.255
>>> Mask:255.255.255.0
>>> inet6 addr: fe80::213:20ff:feff:73aa/64 Scope:Link
>>> inet6 addr:
>>> 11:2233:4455:6677:213:20ff:feff:73aa/64 Scope:Global
>>> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
>>> RX packets:6295 errors:0 dropped:6 overruns:0 frame:0
>>> TX packets:2831 errors:0 dropped:0 overruns:0
>>> carrier:0
>>> collisions:0 txqueuelen:0
>>> RX bytes:644421 (629.3 KiB) TX bytes:177616
>>> (173.4 KiB)
>>>
>>> sit0 Link encap:UNSPEC HWaddr
>>> 00-00-00-00-32-33-33-00-00-00-00-00-00-00-00-00
>>> NOARP MTU:1480 Metric:1
>>> RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>>> TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>>> collisions:0 txqueuelen:0
>>> RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
>>>
>>> wlan0 Link encap:Ethernet HWaddr 80:86:f2:8b:1d:cf
>>> BROADCAST MULTICAST MTU:1500 Metric:1
>>> RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>>> TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>>> collisions:0 txqueuelen:1000
>>> RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
>>>
>>>
>>> Thanks,
>>> Kai
>>>
>>>
>>>
>>> On Tue, May 24, 2016 at 3:36 PM, Kai Kang <kai.7.kang@gmail.com> wrote:
>>>
>>> Hi,
>>>
>>> When I install a host, it fails with:
>>>
>>> 2016-05-24 07:00:01,749 ERROR
>>> [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand]
>>> (DefaultQuartzScheduler_Worker-4) [1bf36cd4] Host
>>> 'node3' is set to Non-Operational, it is missing the
>>> following networks: 'ovirtmgmt'
>>> 2016-05-24 07:00:01,781 WARN
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (DefaultQuartzScheduler_Worker-4) [1bf36cd4] Correlation
>>> ID: 1bf36cd4, Job ID:
>>> db281e8f-67cc-441a-b44c-90b135e509bd, Call Stack: null,
>>> Custom Event ID: -1, Message: Host node3 does not comply
>>> with the cluster Default networks, the following
>>> networks are missing on host: 'ovirtmgmt'
>>>
>>> After I click "Hosts" -> the "Network Interfaces" subtab ->
>>> "Setup Host Networks" in the web UI and drag ovirtmgmt to
>>> "Assigned Logical" to link it with eth0, I can then activate
>>> host "node3" successfully.
>>>
>>> My question is: how can I make this manual operation happen
>>> automatically? Then I could run some automatic tests.
>>>
>>> Thanks a lot.
>>>
>>> --Kai
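Regarding Kai's question above about doing the "Setup Host Networks" step
programmatically: one hedged way is to script it against the engine API. The
sketch below uses the newer oVirt Python SDK (v4); the engine URL, credentials,
host name and NIC name are all placeholders, and on a 3.6 engine the older
SDK/REST interface would need the equivalent setupnetworks request instead.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Hypothetical connection details -- adjust to your engine.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,
)
hosts_service = connection.system_service().hosts_service()
host = hosts_service.list(search='name=node3')[0]   # assumes the host exists
host_service = hosts_service.host_service(host.id)

# Attach the ovirtmgmt logical network to eth0, then persist the host's
# network configuration so it survives a reboot.
host_service.setup_networks(
    modified_network_attachments=[
        types.NetworkAttachment(
            network=types.Network(name='ovirtmgmt'),
            host_nic=types.HostNic(name='eth0'),
        ),
    ],
)
host_service.commit_net_config()
connection.close()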
add user permissions to a template
by Nathanaël Blanchet
Hi all,
I don't want to add permissions for managing the template itself; I want to
set permissions on the template so that each new VM created from this
template inherits the same permissions.
Many thanks.
--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastructures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet@abes.fr
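For reference, assigning a role to a user directly on a template can be
scripted; whether those permissions are then copied to VMs created from the
template depends on the engine's behaviour, so treat this only as a sketch of
the permission assignment itself. It assumes the oVirt Python SDK v4, and the
engine URL, credentials, template name, user ID and role name are placeholders:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,
)
templates_service = connection.system_service().templates_service()
template = templates_service.list(search='name=mytemplate')[0]

# Add a UserRole permission for one user on the template itself.
permissions_service = (
    templates_service.template_service(template.id).permissions_service()
)
permissions_service.add(
    types.Permission(
        user=types.User(id='00000000-0000-0000-0000-000000000000'),  # placeholder user ID
        role=types.Role(name='UserRole'),
    ),
)
connection.close()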
Cannot sync networks
by Roderick Mooi
Good day
On oVirt 3.6.4 HE. I manually reconfigured host networks by doing the
following:
1. Changed to global maintenance mode
2. Shutdown the hosted engine
3. on each host:
a. systemctl stop ovirt-ha-agent && systemctl stop ovirt-ha-broker &&
systemctl stop vdsmd
b. updated /var/lib/vdsm/persistence/netconf/nets to match the required
config
c. updated ifcfg files to match
d. rebooted each host
4. When all is up and running again, verified network connectivity and
settings on each host - ok.
5. Logged into engine web UI - all hosts detected but show network
out-of-sync (see attached example).
6. Individual sync / sync all networks runs for a while until the login times
out. Log back in and go to the host - it still shows out-of-sync (even the
next day or after rebooting again).
7. Eventually got one host to sync (not hosting any VMs and rebooted).
8. Whatever I try cannot get the other hosts to sync (even when moving all
VMs off and rebooting).
Any ideas? Do I have to manually edit the database and change the network
settings for the DC - if so, how do I do this? (the host config is what I
want - DC is old config.)
Alternatively, if I have to start all over again, what is the correct way
to change networks post-HE-install (I need this because the network used for
installation and testing looks different from the final production network)?
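One hedged way to narrow this down before touching the database: "out of sync"
is computed per network property (VLAN, MTU, bridged, addressing and so on), so
it can help to compare what the host actually reports with the logical network
definition under the Networks tab, and then edit the logical network in the
engine to match the hosts rather than the other way round. On a 3.6 host the
reported view can be dumped with, for example:

  vdsClient -s 0 getVdsCaps

(command shown as an illustration; on newer vdsm releases vdsClient was
replaced by vdsm-client).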
Thanks very much,
Roderick
How to change host networks?
by Roderick Mooi
Good day
I would like to change the IP addresses of the host networks used by oVirt,
including the ovirtmgmt bridge. I need to move them to a different subnet
post-installation. What is the best/safest way to do this?
Thanks,
Roderick Mooi
Senior Engineer: South African National Research Network (SANReN)
Meraka Institute, CSIR
roderick@sanren.ac.za | +27 12 841 4111 | www.sanren.ac.za
Failure Backing up oVirt Manager 3.6.5
by Julián Tete
oVirt version = 3.6.5 on CentOS 7.2
I have the Data Warehouse and Reports databases installed.
Method 1:
In the Manager machine:
engine-backup --mode=backup --file=/home/cnscadmin/engine-3.6.bck
--log=/home/cnscadmin/backup.log
less /home/cnscadmin/backup.log
2016-05-23 11:43:30 3585: Start of engine-backup mode backup scope all file
/home/cnscadmin/engine-3.6.bck
2016-05-23 11:43:30 3585: OUTPUT: Backing up:
2016-05-23 11:43:30 3585: Generating pgpass
2016-05-23 11:43:30 3585: OUTPUT: Notifying engine
2016-05-23 11:43:30 3585: pg_cmd running: psql -w -U engine -h localhost -p
5432 engine -t -c SELECT LogEngineBackupEvent('files', now(), 0,
'Started', 'ovirt1.cnsc.net', '/home/cnscadmin/backup.log');
2016-05-23 11:43:30 3585: pg_cmd running: psql -w -U engine -h localhost -p
5432 engine -t -c SELECT LogEngineBackupEvent('db', now(), 0, 'Started', '
ovirt1.cnsc.net', '/home/cnscadmin/backup.log');
2016-05-23 11:43:30 3585: pg_cmd running: psql -w -U engine -h localhost -p
5432 engine -t -c SELECT LogEngineBackupEvent('dwhdb', now(), 0,
'Started', 'ovirt1.cnsc.net', '/home/cnscadmin/backup.log');
2016-05-23 11:43:30 3585: pg_cmd running: psql -w -U engine -h localhost -p
5432 engine -t -c SELECT LogEngineBackupEvent('reportsdb', now(), 0,
'Started', 'ovirt1.cnsc.net', '/home/cnscadmin/backup.log');
2016-05-23 11:43:30 3585: Creating temp folder
/tmp/engine-backup.UwsBOUPMGT/tar
2016-05-23 11:43:30 3585: OUTPUT: - Files
2016-05-23 11:43:30 3585: Backing up files to
/tmp/engine-backup.UwsBOUPMGT/tar/files
2016-05-23 11:43:30 3585: OUTPUT: - Engine database 'engine'
2016-05-23 11:43:30 3585: Backing up database to
/tmp/engine-backup.UwsBOUPMGT/tar/db/engine_backup.db
2016-05-23 11:43:30 3585: pg_cmd running: pg_dump -w -U engine -h localhost
-p 5432 engine -E UTF8 --disable-dollar-quoting --disable-triggers
--format=custom
2016-05-23 11:43:31 3585: OUTPUT: - DWH database 'ovirt_engine_history'
2016-05-23 11:43:31 3585: Backing up dwh database to
/tmp/engine-backup.UwsBOUPMGT/tar/db/dwh_backup.db
2016-05-23 11:43:31 3585: pg_cmd running: pg_dump -w -U
ovirt_engine_history -h localhost -p 5432 ovirt_engine_history -E UTF8
--disable-dollar-quoting --disable-triggers --format=custom
2016-05-23 11:43:32 3585: OUTPUT: - Reports database 'ovirt_engine_reports'
2016-05-23 11:43:32 3585: Backing up reports database to
/tmp/engine-backup.UwsBOUPMGT/tar/db/reports_backup.db
2016-05-23 11:43:32 3585: pg_cmd running: pg_dump -w -U
ovirt_engine_reports -h localhost -p 5432 ovirt_engine_reports -E UTF8
--disable-dollar-quoting --disable-triggers --format=custom
2016-05-23 11:43:33 3585: Creating md5sum at
/tmp/engine-backup.UwsBOUPMGT/tar/md5sum
2016-05-23 11:43:33 3585: OUTPUT: Packing into file
'/home/cnscadmin/engine-3.6.bck'
2016-05-23 11:43:33 3585: Creating tarball /home/cnscadmin/engine-3.6.bck
2016-05-23 11:43:33 3585: OUTPUT: Notifying engine
2016-05-23 11:43:33 3585: pg_cmd running: psql -w -U engine -h localhost -p
5432 engine -t -c SELECT LogEngineBackupEvent('files', now(), 1,
'Finished', 'ovirt1.cnsc.net', '/home/cnscadmin/backup.log');
2016-05-23 11:43:33 3585: pg_cmd running: psql -w -U engine -h localhost -p
5432 engine -t -c SELECT LogEngineBackupEvent('db', now(), 1, 'Finished', '
ovirt1.cnsc.net', '/home/cnscadmin/backup.log');
2016-05-23 11:43:33 3585: pg_cmd running: psql -w -U engine -h localhost -p
5432 engine -t -c SELECT LogEngineBackupEvent('dwhdb', now(), 1,
'Finished', 'ovirt1.cnsc.net', '/home/cnscadmin/backup.log');
2016-05-23 11:43:33 3585: pg_cmd running: psql -w -U engine -h localhost -p
5432 engine -t -c SELECT LogEngineBackupEvent('reportsdb', now(), 1,
'Finished', 'ovirt1.cnsc.net', '/home/cnscadmin/backup.log');
2016-05-23 11:43:33 3585: OUTPUT: Done.
In the Restore Machine:
yum -y install postgresql postgresql-server postgresql-contrib
postgresql-setup initdb
systemctl enable postgresql.service
systemctl start postgresql.service
less /var/lib/pgsql/data/pg_hba.conf
host    engine                  engine                  0.0.0.0/0    md5
host    ovirt_engine_reports    ovirt_engine_reports    0.0.0.0/0    md5
host    ovirt_engine_history    ovirt_engine_history    0.0.0.0/0    md5
host    engine                  engine                  ::0/0        md5
host    ovirt_engine_reports    ovirt_engine_reports    ::0/0        md5
host    ovirt_engine_history    ovirt_engine_history    ::0/0        md5
less /var/lib/pgsql/data/postgresql.conf
listen_addresses = '*' # what IP address(es) to listen on;
firewall-cmd --permanent --add-service=postgresql
firewall-cmd --reload
engine-backup --mode=restore --file=/home/cnscadmin/engine-3.6.bck
--log=/home/cnscadmin/restore.log --provision-db --no-restore-permissions
less /home/cnscadmin/restore.log
2016-05-23 11:49:08 2970: Start of engine-backup mode restore scope all
file /home/cnscadmin/engine-3.6.bck
2016-05-23 11:49:08 2970: OUTPUT: Preparing to restore:
2016-05-23 11:49:08 2970: OUTPUT: - Unpacking file
'/home/cnscadmin/engine-3.6.bck'
2016-05-23 11:49:08 2970: Opening tarball /home/cnscadmin/engine-3.6.bck to
/tmp/engine-backup.gUyOwLF4yI
2016-05-23 11:49:08 2970: Verifying md5
2016-05-23 11:49:08 2970: Verifying version
2016-05-23 11:49:08 2970: Reading config
2016-05-23 11:49:08 2970: OUTPUT: Restoring:
2016-05-23 11:49:08 2970: OUTPUT: - Files
2016-05-23 11:49:08 2970: Restoring files
2016-05-23 11:49:08 2970: Reloading configuration
2016-05-23 11:49:08 2970: OUTPUT: Provisioning PostgreSQL users/databases:
2016-05-23 11:49:08 2970: provisionDB: user engine host localhost port 5432
database engine secured False secured_host_validation False
2016-05-23 11:49:08 2970: OUTPUT: - user 'engine', database 'engine'
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files:
['/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf',
'/etc/ovirt-engine-setup.conf.d/10-packaging-reports-jboss.conf',
'/etc/ovirt-engine-setup.conf.d/10-packaging.conf',
'/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf',
'/tmp/engine-backup.gUyOwLF4yI/pg-provision-answer-file']
Log file:
/var/log/ovirt-engine/setup/ovirt-engine-provisiondb-20160523114908-ane5uv.log
Version: otopi-1.4.1 (otopi-1.4.1-1.el7.centos)
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment customization
[ INFO ] Stage: Setup validation
[ INFO ] Stage: Transaction setup
[ INFO ] Stage: Misc configuration
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Creating PostgreSQL 'engine' database
[ INFO ] Configuring PostgreSQL
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up
[ INFO ] Stage: Clean up
Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-provisiondb-20160523114908-ane5uv.log
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ INFO ] Execution of provisiondb completed successfully
2016-05-23 11:49:12 2970: OUTPUT: Restoring:
2016-05-23 11:49:12 2970: Generating pgpass
2016-05-23 11:49:12 2970: Verifying connection
2016-05-23 11:49:12 2970: pg_cmd running: psql -w -U engine -h localhost -p
5432 engine -c select 1
?column?
----------
1
(1 row)
2016-05-23 11:49:12 2970: pg_cmd running: psql -w -U engine -h localhost -p
5432 engine -t -c show lc_messages
2016-05-23 11:49:12 2970: pg_cmd running: pg_dump -w -U engine -h localhost
-p 5432 engine -s
2016-05-23 11:49:12 2970: pg_cmd running: psql -w -U ovirt_engine_history
-h localhost -p 5432 ovirt_engine_history -c select 1
psql: FATAL: Ident authentication failed for user "ovirt_engine_history"
2016-05-23 11:49:12 2970: FATAL: Can't connect to database
'ovirt_engine_history'. Please see '/bin/engine-backup --help'.
Method 2:
In the Manager machine:
engine-backup --mode=backup --scope=all
--file=/home/cnscadmin/engine-3.6.tar.bz2 --log=/home/cnscadmin/backup.log
In the Restore Machine:
engine-backup --mode=restore --file=/home/cnscadmin/engine-3.6.tar.bz2
--log=/home/cnscadmin/restore.log --change-db-credentials
--db-host=192.168.x.y --db-name=engine --db-user=engine
--db-password=MyPassword
Output:
Preparing to restore:
- Setting credentials for Engine database 'engine'
FATAL: Can't connect to database 'engine'. Please see '/bin/engine-backup
--help'.
less /home/cnscadmin/restore.log :
2016-05-23 13:47:36 2973: Start of engine-backup mode restore scope all
file /home/cnscadmin/engine-3.6.tar.bz2
2016-05-23 13:47:36 2973: OUTPUT: Preparing to restore:
2016-05-23 13:47:36 2973: OUTPUT: - Setting credentials for Engine database
'engine'
2016-05-23 13:47:36 2973: pg_cmd running: psql -w -U engine -h 192.168.x.y
-p 5432 engine -c select 1
psql: FATAL: password authentication failed for user "engine"
password retrieved from file "/tmp/engine-backup.dWpTSHqQxq/.pgpass"
2016-05-23 13:47:36 2973: FATAL: Can't connect to database 'engine'. Please
see '/bin/engine-backup --help'.
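In both runs the restore stops at the first database connectivity check rather
than during the actual restore. In Method 1 only the engine database is
provisioned (--provision-db), so the psql check for ovirt_engine_history fails,
most likely because that database/user was never created and the default
127.0.0.1 ident rule in pg_hba.conf matches before the manually added md5
lines. In Method 2 no provisioning option is given at all, so the engine user
and database must already exist on 192.168.x.y with the supplied password for
the check to pass. A hedged variant of the Method 1 invocation that also
provisions the DWH and Reports databases (assuming this engine-backup build
supports these flags; confirm with engine-backup --help):

engine-backup --mode=restore --file=/home/cnscadmin/engine-3.6.bck \
  --log=/home/cnscadmin/restore.log \
  --provision-db --provision-dwh-db --provision-reports-db \
  --no-restore-permissions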
gluster VM disk permissions
by Bill James
I'm sure I must have just missed something...
I just set up a new oVirt cluster with gluster and NFS data domains.
VMs on the NFS domain start up with no issues.
VMs on the gluster domains complain of "Permission denied" on startup.
2016-05-17 14:14:51,959 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ForkJoinPool-1-worker-11) [] Correlation
ID: null, Call Stack: null, Custom Event ID: -1, Message: VM
billj7-2.j2noc.com is down with error. Exit message: internal error:
process exited while connecting to monitor: 2016-05-17T21:14:51.162932Z
qemu-kvm: -drive
file=/rhev/data-center/00000001-0001-0001-0001-0000000002c5/22df0943-c131-4ed8-ba9c-05923afcf8e3/images/2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25/a2b0a04d-041f-4342-9687-142cc641b35e,if=none,id=drive-virtio-disk0,format=raw,serial=2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25,cache=none,werror=stop,rerror=stop,aio=threads:
Could not open
'/rhev/data-center/00000001-0001-0001-0001-0000000002c5/22df0943-c131-4ed8-ba9c-05923afcf8e3/images/2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25/a2b0a04d-041f-4342-9687-142cc641b35e':
Permission denied
I did set up the gluster permissions:
gluster volume set gv1 storage.owner-uid 36
gluster volume set gv1 storage.owner-gid 36
files look fine:
[root@ovirt1 prod 2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25]# ls -lah
total 2.0G
drwxr-xr-x 2 vdsm kvm 4.0K May 17 09:39 .
drwxr-xr-x 11 vdsm kvm 4.0K May 17 10:40 ..
-rw-rw---- 1 vdsm kvm 20G May 17 10:33
a2b0a04d-041f-4342-9687-142cc641b35e
-rw-rw---- 1 vdsm kvm 1.0M May 17 09:38
a2b0a04d-041f-4342-9687-142cc641b35e.lease
-rw-r--r-- 1 vdsm kvm 259 May 17 09:39
a2b0a04d-041f-4342-9687-142cc641b35e.meta
I did check, and the vdsm user can read the file just fine.
*If I chmod the disk to 666, the VM starts up fine.*
[root@ovirt1 prod 2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25]# grep 36
/etc/passwd /etc/group
/etc/passwd:vdsm:x:36:36:Node Virtualization Manager:/:/bin/bash
/etc/group:kvm:x:36:qemu,sanlock
ovirt-engine-3.6.4.1-1.el7.centos.noarch
glusterfs-3.7.11-1.el7.x86_64
I also set the libvirt qemu user to root, for the import-to-ovirt.pl script.
[root@ovirt1 prod 2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25]# grep ^user
/etc/libvirt/qemu.conf
user = "root"
[root@ovirt1 prod 2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25]# gluster volume
info gv1
Volume Name: gv1
Type: Replicate
Volume ID: 062aa1a5-91e8-420d-800e-b8bc4aff20d8
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1-gl.j2noc.com:/ovirt-store/brick1/gv1
Brick2: ovirt2-gl.j2noc.com:/ovirt-store/brick1/gv1
Brick3: ovirt3-gl.j2noc.com:/ovirt-store/brick1/gv1
Options Reconfigured:
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
features.shard: on
features.shard-block-size: 64MB
storage.owner-uid: 36
storage.owner-gid: 36
[root@ovirt1 prod 2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25]# gluster volume
status gv1
Status of volume: gv1
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick ovirt1-gl.j2noc.com:/ovirt-store/brick1/gv1    49152     0          Y       2046
Brick ovirt2-gl.j2noc.com:/ovirt-store/brick1/gv1    49152     0          Y       22532
Brick ovirt3-gl.j2noc.com:/ovirt-store/brick1/gv1    49152     0          Y       59683
NFS Server on localhost 2049 0 Y 2200
Self-heal Daemon on localhost N/A N/A Y 2232
NFS Server on ovirt3-gl.j2noc.com 2049 0 Y 65363
Self-heal Daemon on ovirt3-gl.j2noc.com N/A N/A Y 65371
NFS Server on ovirt2-gl.j2noc.com 2049 0 Y 17621
Self-heal Daemon on ovirt2-gl.j2noc.com N/A N/A Y 17629
Task Status of Volume gv1
------------------------------------------------------------------------------
There are no active volume tasks
??