[ovirt-users] gluster VM disk permissions

Bill James bill.james at j2.com
Tue May 17 21:30:14 UTC 2016


I'm sure I must have just missed something...
I just set up a new ovirt cluster with gluster & NFS data domains.

VMs on the NFS domain startup with no issues.
VMs on the gluster domains complain of "Permission denied" on startup.

2016-05-17 14:14:51,959 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-11) [] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM billj7-2.j2noc.com is down with error. Exit message: internal error: process exited while connecting to monitor: 2016-05-17T21:14:51.162932Z qemu-kvm: -drive file=/rhev/data-center/00000001-0001-0001-0001-0000000002c5/22df0943-c131-4ed8-ba9c-05923afcf8e3/images/2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25/a2b0a04d-041f-4342-9687-142cc641b35e,if=none,id=drive-virtio-disk0,format=raw,serial=2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25,cache=none,werror=stop,rerror=stop,aio=threads: Could not open '/rhev/data-center/00000001-0001-0001-0001-0000000002c5/22df0943-c131-4ed8-ba9c-05923afcf8e3/images/2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25/a2b0a04d-041f-4342-9687-142cc641b35e': Permission denied
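
I'd expect the same error to show up on the hypervisor itself; these are the logs I'd look at there (the VM name is just the one from the event above, and /var/log/libvirt/qemu/ and /var/log/vdsm/vdsm.log are the standard locations as far as I know):

# qemu's own log for the failing VM
tail -50 /var/log/libvirt/qemu/billj7-2.j2noc.com.log

# vdsm's view of the failed start
grep -i 'Permission denied' /var/log/vdsm/vdsm.log | tail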


I did set up the gluster permissions:
gluster volume set gv1 storage.owner-uid 36
gluster volume set gv1 storage.owner-gid 36
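
(These should confirm both options took effect, assuming 'gluster volume get' behaves the same on 3.7.11 as I think it does:)

gluster volume get gv1 storage.owner-uid
gluster volume get gv1 storage.owner-gid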

files look fine:
[root@ovirt1 prod 2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25]# ls -lah
total 2.0G
drwxr-xr-x  2 vdsm kvm 4.0K May 17 09:39 .
drwxr-xr-x 11 vdsm kvm 4.0K May 17 10:40 ..
-rw-rw----  1 vdsm kvm  20G May 17 10:33 a2b0a04d-041f-4342-9687-142cc641b35e
-rw-rw----  1 vdsm kvm 1.0M May 17 09:38 a2b0a04d-041f-4342-9687-142cc641b35e.lease
-rw-r--r--  1 vdsm kvm  259 May 17 09:39 a2b0a04d-041f-4342-9687-142cc641b35e.meta

I did check, and the vdsm user can read the file just fine.
*If I chmod the disk to 666, the VM starts up fine.*
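
For reference, this is roughly what I did, run from inside the image directory from the ls above; the dd read test as vdsm is just my own quick check, and the chmod is obviously a workaround, not a fix:

# read the first MB of the disk image as the vdsm user
sudo -u vdsm dd if=a2b0a04d-041f-4342-9687-142cc641b35e of=/dev/null bs=1M count=1

# the workaround that lets the VM boot
chmod 666 a2b0a04d-041f-4342-9687-142cc641b35e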


[root@ovirt1 prod 2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25]# grep 36 /etc/passwd /etc/group
/etc/passwd:vdsm:x:36:36:Node Virtualization Manager:/:/bin/bash
/etc/group:kvm:x:36:qemu,sanlock
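
('id' on both service accounts should show the same picture; including it here in case it helps:)

id vdsm
id qemu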


ovirt-engine-3.6.4.1-1.el7.centos.noarch
glusterfs-3.7.11-1.el7.x86_64


I also set the libvirt qemu user to root, for the import-to-ovirt.pl script.

[root@ovirt1 prod 2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25]# grep ^user /etc/libvirt/qemu.conf
user = "root"


[root@ovirt1 prod 2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25]# gluster volume info gv1

Volume Name: gv1
Type: Replicate
Volume ID: 062aa1a5-91e8-420d-800e-b8bc4aff20d8
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1-gl.j2noc.com:/ovirt-store/brick1/gv1
Brick2: ovirt2-gl.j2noc.com:/ovirt-store/brick1/gv1
Brick3: ovirt3-gl.j2noc.com:/ovirt-store/brick1/gv1
Options Reconfigured:
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
features.shard: on
features.shard-block-size: 64MB
storage.owner-uid: 36
storage.owner-gid: 36
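
(Side note: apart from the owner uid/gid, I set these options one by one; I believe the same set can be applied in one go with the virt option group that ships with glusterfs, though I haven't verified the group file on 3.7.11 matches exactly:)

# applies the options from /var/lib/glusterd/groups/virt in one shot
gluster volume set gv1 group virt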

[root@ovirt1 prod 2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25]# gluster volume status gv1
Status of volume: gv1
Gluster process                                     TCP Port  RDMA Port  Online  Pid
-------------------------------------------------------------------------------------
Brick ovirt1-gl.j2noc.com:/ovirt-store/brick1/gv1   49152     0          Y       2046
Brick ovirt2-gl.j2noc.com:/ovirt-store/brick1/gv1   49152     0          Y       22532
Brick ovirt3-gl.j2noc.com:/ovirt-store/brick1/gv1   49152     0          Y       59683
NFS Server on localhost                             2049      0          Y       2200
Self-heal Daemon on localhost                       N/A       N/A        Y       2232
NFS Server on ovirt3-gl.j2noc.com                   2049      0          Y       65363
Self-heal Daemon on ovirt3-gl.j2noc.com             N/A       N/A        Y       65371
NFS Server on ovirt2-gl.j2noc.com                   2049      0          Y       17621
Self-heal Daemon on ovirt2-gl.j2noc.com             N/A       N/A        Y       17629

Task Status of Volume gv1
------------------------------------------------------------------------------
There are no active volume tasks
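
For completeness, this is how I'd check the fuse mount on the hypervisor side as well (the glusterSD path is where oVirt normally mounts gluster storage domains, as far as I know):

# the fuse mount oVirt created for the gluster storage domain
mount -t fuse.glusterfs

# ownership/permissions on the mount point itself
ls -ld /rhev/data-center/mnt/glusterSD/*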



Any ideas what I'm missing?

