I tried posting this to the ovirt-users list but got no response, so I'll try here too.

I just set up a new oVirt cluster with Gluster and NFS data domains.

VMs on the NFS domain start up with no issues.
VMs on the Gluster domains complain of "Permission denied" on startup:

2016-05-17 14:14:51,959 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-11) [] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM billj7-2.j2noc.com is down with error. Exit message: internal error: process exited while connecting to monitor: 2016-05-17T21:14:51.162932Z qemu-kvm: -drive
file=/rhev/data-center/00000001-0001-0001-0001-0000000002c5/22df0943-c131-4ed8-ba9c-05923afcf8e3/images/2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25/a2b0a04d-041f-4342-9687-142cc641b35e,if=none,id=drive-virtio-disk0,format=raw,serial=2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25,cache=none,werror=stop,rerror=stop,aio=threads:
Could not open '/rhev/data-center/00000001-0001-0001-0001-0000000002c5/22df0943-c131-4ed8-ba9c-05923afcf8e3/images/2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25/a2b0a04d-041f-4342-9687-142cc641b35e': Permission denied

I did set up the Gluster permissions:
gluster volume set gv1 storage.owner-uid 36
gluster volume set gv1 storage.owner-gid 36
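
For what it's worth, a quick way to confirm those options took effect is to look at the brick roots directly (just a sketch, brick path taken from the volume info further down; storage.owner-uid/gid should leave the volume root owned by 36:36):

ls -ldn /ovirt-store/brick1/gv1   # run on each gluster host; expect uid 36 / gid 36 on the brick root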

The files look fine:

[root@ovirt1 prod 2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25]# ls -lah
total 2.0G
drwxr-xr-x  2 vdsm kvm 4.0K May 17 09:39 .
drwxr-xr-x 11 vdsm kvm 4.0K May 17 10:40 ..
-rw-rw---- 1 vdsm kvm  20G May 17 10:33 a2b0a04d-041f-4342-9687-142cc641b35e
-rw-rw---- 1 vdsm kvm 1.0M May 17 09:38 a2b0a04d-041f-4342-9687-142cc641b35e.lease
-rw-r--r-- 1 vdsm kvm  259 May 17 09:39 a2b0a04d-041f-4342-9687-142cc641b35e.meta

I did check, and the vdsm user can read the file just fine.
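
Roughly what I mean by that check, plus the same read attempted as the qemu user with each group (just a sketch, not exactly how libvirt launches qemu; path copied from the error above):

img=/rhev/data-center/00000001-0001-0001-0001-0000000002c5/22df0943-c131-4ed8-ba9c-05923afcf8e3/images/2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25/a2b0a04d-041f-4342-9687-142cc641b35e
sudo -u vdsm dd if="$img" of=/dev/null bs=1M count=1           # read as vdsm (this works for me)
sudo -u qemu -g kvm dd if="$img" of=/dev/null bs=1M count=1    # read as qemu, primary group forced to kvm
sudo -u qemu -g qemu dd if="$img" of=/dev/null bs=1M count=1   # read as qemu, primary group forced to qemu
# cache=none in the qemu command line means O_DIRECT, so adding iflag=direct makes the test a bit closer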

*If I chmod the disk to 666, the VM starts up fine.*
Also, if I chgrp it to qemu, the VM starts up fine.

[root@ovirt2 prod a7af2477-4a19-4f01-9de1-c939c99e53ad]# ls -l 253f9615-f111-45ca-bdce-cbc9e70406df
-rw-rw---- 1 vdsm qemu 21474836480 May 18 11:38 253f9615-f111-45ca-bdce-cbc9e70406df
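
The two workarounds above are literally just (sketch, file names from the listings above):

chmod 0666 a2b0a04d-041f-4342-9687-142cc641b35e    # open the mode up completely; VM then boots
chgrp qemu 253f9615-f111-45ca-bdce-cbc9e70406df    # or keep 0660 and swap the group from kvm to qemu; VM also boots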

This seems similar to the issue here, but that report suggests it was fixed:
https://bugzilla.redhat.com/show_bug.cgi?id=1052114

[root@ovirt1 prod 2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25]# grep 36 /etc/passwd /etc/group
/etc/passwd:vdsm:x:36:36:Node Virtualization Manager:/:/bin/bash
/etc/group:kvm:x:36:qemu,sanlock
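
Group membership and the identity the qemu processes actually run with can be cross-checked like this (sketch; the ps line assumes at least one VM, e.g. an NFS-backed one, is running on the host):

id vdsm                              # expect uid=36(vdsm) gid=36(kvm)
id qemu                              # qemu should list kvm(36) among its groups, per /etc/group above
ps -o user,group,cmd -C qemu-kvm     # user and group each running qemu-kvm process actually has
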
ovirt-engine-3.6.4.1-1.el7.centos.noarch
glusterfs-3.7.11-1.el7.x86_64
qemu-img-ev-2.3.0-31.el7_2.4.1.x86_64
qemu-kvm-ev-2.3.0-31.el7_2.4.1.x86_64
libvirt-daemon-1.2.17-13.el7_2.4.x86_64

I also set the libvirt qemu user to root for the import-to-ovirt.pl script.

[root@ovirt1 prod 2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25]# grep ^user /etc/libvirt/qemu.conf
user = "root"
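
For completeness, the neighbouring ownership settings in the same file (sketch; as far as I understand, group and dynamic_ownership are the other stock qemu.conf knobs that decide what the qemu process runs as and whether libvirt chowns the disks to it):

grep -E '^(user|group|dynamic_ownership)' /etc/libvirt/qemu.conf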

[root@ovirt1 prod 2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25]# gluster volume info gv1

Volume Name: gv1
Type: Replicate
Volume ID: 062aa1a5-91e8-420d-800e-b8bc4aff20d8
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1-gl.j2noc.com:/ovirt-store/brick1/gv1
Brick2: ovirt2-gl.j2noc.com:/ovirt-store/brick1/gv1
Brick3: ovirt3-gl.j2noc.com:/ovirt-store/brick1/gv1
Options Reconfigured:
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
features.shard: on
features.shard-block-size: 64MB
storage.owner-uid: 36
storage.owner-gid: 36

[root@ovirt1 prod 2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25]# gluster volume status gv1
Status of volume: gv1
Gluster process                                     TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1-gl.j2noc.com:/ovirt-store/brick1/gv1   49152     0          Y       2046
Brick ovirt2-gl.j2noc.com:/ovirt-store/brick1/gv1   49152     0          Y       22532
Brick ovirt3-gl.j2noc.com:/ovirt-store/brick1/gv1   49152     0          Y       59683
NFS Server on localhost                             2049      0          Y       2200
Self-heal Daemon on localhost                       N/A       N/A        Y       2232
NFS Server on ovirt3-gl.j2noc.com                   2049      0          Y       65363
Self-heal Daemon on ovirt3-gl.j2noc.com             N/A       N/A        Y       65371
NFS Server on ovirt2-gl.j2noc.com                   2049      0          Y       17621
Self-heal Daemon on ovirt2-gl.j2noc.com             N/A       N/A        Y       17629

Task Status of Volume gv1
------------------------------------------------------------------------------
There are no active volume tasks
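
And, to confirm which mount actually backs the failing path and what ownership looks like through it (sketch, paths copied from the error at the top):

df -hT /rhev/data-center/00000001-0001-0001-0001-0000000002c5/22df0943-c131-4ed8-ba9c-05923afcf8e3
# I'd expect fuse.glusterfs as the type and the real gluster mountpoint here
ls -ldn /rhev/data-center/00000001-0001-0001-0001-0000000002c5/22df0943-c131-4ed8-ba9c-05923afcf8e3/images/2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25
# numeric owner/group of the image directory as the hypervisor sees it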

Any ideas on why oVirt thinks the disk needs to have group qemu?