[Users] Adding new disk to Ovort node

Gabi C gabicr at gmail.com
Thu Dec 19 10:40:27 UTC 2013


I also have these in the audit log:


type=AVC msg=audit(1387273747.999:111): avc:  denied  { lock } for
pid=2299 comm="login" path="/var/log/wtmp" dev="dm-8" ino=38
scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387273858.567:171): avc:  denied  { read } for
pid=2941 comm="login" name="btmp" dev="dm-8" ino=15
scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387273858.567:174): avc:  denied  { lock } for
pid=2941 comm="login" path="/var/log/wtmp" dev="dm-8" ino=38
scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387274101.999:229): avc:  denied  { lock } for
pid=3118 comm="login" path="/var/log/btmp" dev="dm-8" ino=15
scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387274106.721:232): avc:  denied  { lock } for
pid=3118 comm="login" path="/var/log/btmp" dev="dm-8" ino=15
scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387274111.958:239): avc:  denied  { read } for
pid=3118 comm="login" name="btmp" dev="dm-8" ino=15
scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387274111.959:242): avc:  denied  { lock } for
pid=3118 comm="login" path="/var/log/wtmp" dev="dm-8" ino=38
scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387274151.848:253): avc:  denied  { sigchld } for
pid=3158 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0
tcontext=system_u:system_r:initrc_t:s0 tclass=process
type=AVC msg=audit(1387274151.949:259): avc:  denied  { dyntransition }
for  pid=3158 comm="sshd" scontext=system_u:system_r:initrc_t:s0
tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process
type=AVC msg=audit(1387274156.584:280): avc:  denied  { open } for
pid=3181 comm="agetty" path="/var/log/wtmp" dev="dm-8" ino=38
scontext=system_u:system_r:getty_t:s0
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387274195.492:289): avc:  denied  { sigchld } for
pid=3183 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0
tcontext=system_u:system_r:initrc_t:s0 tclass=process
type=AVC msg=audit(1387274268.624:341): avc:  denied  { sigchld } for
pid=3623 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0
tcontext=system_u:system_r:initrc_t:s0 tclass=process
type=AVC msg=audit(1387274268.723:347): avc:  denied  { dyntransition }
for  pid=3623 comm="sshd" scontext=system_u:system_r:initrc_t:s0
tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process
type=AVC msg=audit(1387274398.240:420): avc:  denied  { sigchld } for
pid=4440 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0
tcontext=system_u:system_r:initrc_t:s0 tclass=process
type=AVC msg=audit(1387274398.341:426): avc:  denied  { dyntransition }
for  pid=4440 comm="sshd" scontext=system_u:system_r:initrc_t:s0
tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process
type=AVC msg=audit(1387274400.470:457): avc:  denied  { sigchld } for
pid=4462 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0
tcontext=system_u:system_r:initrc_t:s0 tclass=process
type=AVC msg=audit(1387274400.566:463): avc:  denied  { dyntransition }
for  pid=4462 comm="sshd" scontext=system_u:system_r:initrc_t:s0
tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process
type=AVC msg=audit(1387274401.854:481): avc:  denied  { open } for
pid=4501 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29
scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387274559.404:571): avc:  denied  { sigchld } for
pid=5533 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0
tcontext=system_u:system_r:initrc_t:s0 tclass=process
type=AVC msg=audit(1387274559.505:577): avc:  denied  { dyntransition }
for  pid=5533 comm="sshd" scontext=system_u:system_r:initrc_t:s0
tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process
type=AVC msg=audit(1387275001.263:602): avc:  denied  { open } for
pid=6237 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29
scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387275601.308:654): avc:  denied  { open } for
pid=7183 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29
scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387276201.380:855): avc:  denied  { open } for
pid=8568 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29
scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387276801.433:948): avc:  denied  { open } for
pid=9670 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29
scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387277401.474:977): avc:  denied  { open } for
pid=10390 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29
scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387277646.293:1044): avc:  denied  { sigchld } for
pid=10911 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0
tcontext=system_u:system_r:initrc_t:s0 tclass=process
type=AVC msg=audit(1387277646.397:1050): avc:  denied  { dyntransition }
for  pid=10911 comm="sshd" scontext=system_u:system_r:initrc_t:s0
tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process
type=AVC msg=audit(1387277708.026:1080): avc:  denied  { sigchld } for
pid=11024 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0
tcontext=system_u:system_r:initrc_t:s0 tclass=process
type=AVC msg=audit(1387277708.128:1091): avc:  denied  { dyntransition }
for  pid=11032 comm="sshd" scontext=system_u:system_r:initrc_t:s0
tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process
type=AVC msg=audit(1387278001.552:1168): avc:  denied  { open } for
pid=11653 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29
scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387278601.647:1198): avc:  denied  { open } for
pid=12577 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29
scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387278937.733:1209): avc:  denied  { sigchld } for
pid=13127 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0
tcontext=system_u:system_r:initrc_t:s0 tclass=process
type=AVC msg=audit(1387278937.840:1215): avc:  denied  { dyntransition }
for  pid=13127 comm="sshd" scontext=system_u:system_r:initrc_t:s0
tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process
type=AVC msg=audit(1387279201.695:1234): avc:  denied  { open } for
pid=13580 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29
scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387282201.945:1348): avc:  denied  { open } for
pid=18142 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29
scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387283170.033:1379): avc:  denied  { open } for
pid=19723 comm="qemu-system-x86"
path="/rhev/data-center/mnt/glusterSD/10.125.1.194:volum1/fefb20e4-0585-4592-ba78-6715045bcf69/images/dd2062fe-85fa-4ede-8eb8-8a747ffb2298/f97e3e03-178a-4627-986f-7482a860be35"
dev="fuse" ino=13067829551237403768
scontext=system_u:system_r:svirt_t:s0:c107,c132
tcontext=system_u:object_r:fusefs_t:s0 tclass=file
type=AVC msg=audit(1387283170.034:1380): avc:  denied  { getattr } for
pid=19723 comm="qemu-system-x86" name="/" dev="fuse" ino=1
scontext=system_u:system_r:svirt_t:s0:c107,c132
tcontext=system_u:object_r:fusefs_t:s0 tclass=filesystem
type=AVC msg=audit(1387283319.223:1394): avc:  denied  { open } for
pid=20229 comm="qemu-system-x86"
path="/rhev/data-center/mnt/glusterSD/10.125.1.194:volum1/fefb20e4-0585-4592-ba78-6715045bcf69/images/321ad473-3741-4825-8ac0-6c416aa8f490/d587d693-5a21-4f82-b499-fce1a976fb34"
dev="fuse" ino=9611908447529383451
scontext=system_u:system_r:svirt_t:s0:c765,c915
tcontext=system_u:object_r:fusefs_t:s0 tclass=file
type=AVC msg=audit(1387283319.224:1395): avc:  denied  { getattr } for
pid=20229 comm="qemu-system-x86" name="/" dev="fuse" ino=1
scontext=system_u:system_r:svirt_t:s0:c765,c915
tcontext=system_u:object_r:fusefs_t:s0 tclass=filesystem
type=AVC msg=audit(1387283401.033:1412): avc:  denied  { open } for
pid=20456 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29
scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387284001.078:1440): avc:  denied  { open } for
pid=21704 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29
scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387284419.112:1460): avc:  denied  { open } for
pid=22657 comm="qemu-system-x86"
path="/rhev/data-center/mnt/glusterSD/10.125.1.194:volum1/fefb20e4-0585-4592-ba78-6715045bcf69/images/dd2062fe-85fa-4ede-8eb8-8a747ffb2298/f97e3e03-178a-4627-986f-7482a860be35"
dev="fuse" ino=13067829551237403768
scontext=system_u:system_r:svirt_t:s0:c93,c328
tcontext=system_u:object_r:fusefs_t:s0 tclass=file
type=AVC msg=audit(1387284419.113:1461): avc:  denied  { getattr } for
pid=22657 comm="qemu-system-x86" name="/" dev="fuse" ino=1
scontext=system_u:system_r:svirt_t:s0:c93,c328
tcontext=system_u:object_r:fusefs_t:s0 tclass=filesystem
type=AVC msg=audit(1387284498.390:1482): avc:  denied  { open } for
pid=23020 comm="qemu-system-x86"
path="/rhev/data-center/mnt/glusterSD/10.125.1.194:volum1/fefb20e4-0585-4592-ba78-6715045bcf69/images/321ad473-3741-4825-8ac0-6c416aa8f490/d587d693-5a21-4f82-b499-fce1a976fb34"
dev="fuse" ino=9611908447529383451
scontext=system_u:system_r:svirt_t:s0:c533,c863
tcontext=system_u:object_r:fusefs_t:s0 tclass=file
type=AVC msg=audit(1387284498.391:1483): avc:  denied  { getattr } for
pid=23020 comm="qemu-system-x86" name="/" dev="fuse" ino=1
scontext=system_u:system_r:svirt_t:s0:c533,c863
tcontext=system_u:object_r:fusefs_t:s0 tclass=filesystem
type=AVC msg=audit(1387284510.039:1496): avc:  denied  { open } for
pid=23180 comm="qemu-system-x86"
path="/rhev/data-center/mnt/glusterSD/10.125.1.194:volum1/fefb20e4-0585-4592-ba78-6715045bcf69/images/dd2062fe-85fa-4ede-8eb8-8a747ffb2298/f97e3e03-178a-4627-986f-7482a860be35"
dev="fuse" ino=13067829551237403768
scontext=system_u:system_r:svirt_t:s0:c377,c700
tcontext=system_u:object_r:fusefs_t:s0 tclass=file
type=AVC msg=audit(1387284510.040:1497): avc:  denied  { getattr } for
pid=23180 comm="qemu-system-x86" name="/" dev="fuse" ino=1
scontext=system_u:system_r:svirt_t:s0:c377,c700
tcontext=system_u:object_r:fusefs_t:s0 tclass=filesystem
type=AVC msg=audit(1387284601.124:1513): avc:  denied  { open } for
pid=23506 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29
scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387285201.193:1544): avc:  denied  { open } for
pid=24722 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29
scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387285390.572:1562): avc:  denied  { open } for
pid=25215 comm="qemu-system-x86"
path="/rhev/data-center/mnt/glusterSD/10.125.1.194:volum1/fefb20e4-0585-4592-ba78-6715045bcf69/images/321ad473-3741-4825-8ac0-6c416aa8f490/d587d693-5a21-4f82-b499-fce1a976fb34"
dev="fuse" ino=9611908447529383451
scontext=system_u:system_r:svirt_t:s0:c165,c765
tcontext=system_u:object_r:fusefs_t:s0 tclass=file
type=AVC msg=audit(1387285390.573:1563): avc:  denied  { getattr } for
pid=25215 comm="qemu-system-x86" name="/" dev="fuse" ino=1
scontext=system_u:system_r:svirt_t:s0:c165,c765
tcontext=system_u:object_r:fusefs_t:s0 tclass=filesystem
type=AVC msg=audit(1387285801.297:1595): avc:  denied  { open } for
pid=26199 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29
scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387286176.904:1606): avc:  denied  { open } for
pid=27341 comm="qemu-system-x86"
path="/rhev/data-center/mnt/glusterSD/10.125.1.194:volum1/fefb20e4-0585-4592-ba78-6715045bcf69/images/dd2062fe-85fa-4ede-8eb8-8a747ffb2298/f97e3e03-178a-4627-986f-7482a860be35"
dev="fuse" ino=13067829551237403768
scontext=system_u:system_r:svirt_t:s0:c288,c302
tcontext=system_u:object_r:fusefs_t:s0 tclass=file
type=AVC msg=audit(1387286176.906:1607): avc:  denied  { getattr } for
pid=27341 comm="qemu-system-x86" name="/" dev="fuse" ino=1
scontext=system_u:system_r:svirt_t:s0:c288,c302
tcontext=system_u:object_r:fusefs_t:s0 tclass=filesystem
type=AVC msg=audit(1387286254.901:1628): avc:  denied  { open } for
pid=27764 comm="qemu-system-x86"
path="/rhev/data-center/mnt/glusterSD/10.125.1.194:volum1/fefb20e4-0585-4592-ba78-6715045bcf69/images/321ad473-3741-4825-8ac0-6c416aa8f490/d587d693-5a21-4f82-b499-fce1a976fb34"
dev="fuse" ino=9611908447529383451
scontext=system_u:system_r:svirt_t:s0:c786,c947
tcontext=system_u:object_r:fusefs_t:s0 tclass=file
type=AVC msg=audit(1387286254.902:1629): avc:  denied  { getattr } for
pid=27764 comm="qemu-system-x86" name="/" dev="fuse" ino=1
scontext=system_u:system_r:svirt_t:s0:c786,c947
tcontext=system_u:object_r:fusefs_t:s0 tclass=filesystem
type=AVC msg=audit(1387286401.342:1646): avc:  denied  { open } for
pid=28176 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29
scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387287001.404:1669): avc:  denied  { open } for
pid=29599 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29
scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387288180.517:1708): avc:  denied  { open } for
pid=32133 comm="qemu-system-x86"
path="/rhev/data-center/mnt/glusterSD/10.125.1.194:volum1/fefb20e4-0585-4592-ba78-6715045bcf69/images/321ad473-3741-4825-8ac0-6c416aa8f490/d587d693-5a21-4f82-b499-fce1a976fb34"
dev="fuse" ino=9611908447529383451
scontext=system_u:system_r:svirt_t:s0:c185,c292
tcontext=system_u:object_r:fusefs_t:s0 tclass=file
type=AVC msg=audit(1387288180.518:1709): avc:  denied  { getattr } for
pid=32133 comm="qemu-system-x86" name="/" dev="fuse" ino=1
scontext=system_u:system_r:svirt_t:s0:c185,c292
tcontext=system_u:object_r:fusefs_t:s0 tclass=filesystem
type=AVC msg=audit(1387288201.491:1725): avc:  denied  { open } for
pid=32226 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29
scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387289401.633:1777): avc:  denied  { open } for
pid=2364 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29
scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387293761.064:1921): avc:  denied  { sigchld } for
pid=11668 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0
tcontext=system_u:system_r:initrc_t:s0 tclass=process
type=AVC msg=audit(1387293761.168:1927): avc:  denied  { dyntransition }
for  pid=11668 comm="sshd" scontext=system_u:system_r:initrc_t:s0
tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process
type=AVC msg=audit(1387294201.473:1953): avc:  denied  { open } for
pid=12683 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29
scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387294801.508:1975): avc:  denied  { open } for
pid=13953 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29
scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387297201.633:2059): avc:  denied  { open } for
pid=19295 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29
scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387312801.587:2580): avc:  denied  { open } for
pid=18968 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29
scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387352472.746:3921): avc:  denied  { sigchld } for
pid=942 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0
tcontext=system_u:system_r:initrc_t:s0 tclass=process
type=AVC msg=audit(1387352472.853:3932): avc:  denied  { dyntransition }
for  pid=954 comm="sshd" scontext=system_u:system_r:initrc_t:s0
tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process
type=AVC msg=audit(1387353052.756:3962): avc:  denied  { execute } for
pid=2415 comm="glusterd" name="tune2fs" dev="dm-5" ino=2748
scontext=system_u:system_r:glusterd_t:s0
tcontext=unconfined_u:object_r:fsadm_exec_t:s0 tclass=file
type=AVC msg=audit(1387353052.756:3962): avc:  denied  { execute_no_trans }
for  pid=2415 comm="glusterd" path="/usr/sbin/tune2fs" dev="dm-5" ino=2748
scontext=system_u:system_r:glusterd_t:s0
tcontext=unconfined_u:object_r:fsadm_exec_t:s0 tclass=file
type=AVC msg=audit(1387353052.762:3963): avc:  denied  { read } for
pid=2415 comm="tune2fs" name="dm-9" dev="devtmpfs" ino=16827
scontext=system_u:system_r:glusterd_t:s0
tcontext=system_u:object_r:fixed_disk_device_t:s0 tclass=blk_file
type=AVC msg=audit(1387353052.762:3963): avc:  denied  { open } for
pid=2415 comm="tune2fs" path="/dev/dm-9" dev="devtmpfs" ino=16827
scontext=system_u:system_r:glusterd_t:s0
tcontext=system_u:object_r:fixed_disk_device_t:s0 tclass=blk_file
type=AVC msg=audit(1387353052.762:3964): avc:  denied  { getattr } for
pid=2415 comm="tune2fs" path="/dev/dm-9" dev="devtmpfs" ino=16827
scontext=system_u:system_r:glusterd_t:s0
tcontext=system_u:object_r:fixed_disk_device_t:s0 tclass=blk_file
type=AVC msg=audit(1387353052.762:3965): avc:  denied  { ioctl } for
pid=2415 comm="tune2fs" path="/dev/dm-9" dev="devtmpfs" ino=16827
scontext=system_u:system_r:glusterd_t:s0
tcontext=system_u:object_r:fixed_disk_device_t:s0 tclass=blk_file
type=AVC msg=audit(1387355983.201:105): avc:  denied  { read } for
pid=3066 comm="login" name="btmp" dev="dm-9" ino=15
scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387355983.201:108): avc:  denied  { lock } for
pid=3066 comm="login" path="/var/log/wtmp" dev="dm-9" ino=38
scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387356016.592:152): avc:  denied  { sigchld } for
pid=3176 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0
tcontext=system_u:system_r:initrc_t:s0 tclass=process
type=AVC msg=audit(1387356016.699:158): avc:  denied  { dyntransition }
for  pid=3176 comm="sshd" scontext=system_u:system_r:initrc_t:s0
tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process
type=AVC msg=audit(1387356040.436:179): avc:  denied  { open } for
pid=3206 comm="agetty" path="/var/log/wtmp" dev="dm-9" ino=38
scontext=system_u:system_r:getty_t:s0
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387359059.970:308): avc:  denied  { sigchld } for
pid=3931 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0
tcontext=system_u:system_r:initrc_t:s0 tclass=process
type=AVC msg=audit(1387359060.074:319): avc:  denied  { dyntransition }
for  pid=3935 comm="sshd" scontext=system_u:system_r:initrc_t:s0
tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process
type=AVC msg=audit(1387365717.209:5806): avc:  denied  { sigchld } for
pid=29945 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0
tcontext=system_u:system_r:initrc_t:s0 tclass=process
type=AVC msg=audit(1387365717.315:5817): avc:  denied  { dyntransition }
for  pid=29963 comm="sshd" scontext=system_u:system_r:initrc_t:s0
tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process
type=AVC msg=audit(1387365750.532:5873): avc:  denied  { relabelfrom } for
pid=30249 comm="glusterfsd" name="23fe702e-be59-4f65-8c55-58b1b1e1b023"
dev="dm-10" ino=1835015 scontext=system_u:system_r:glusterd_t:s0
tcontext=system_u:object_r:file_t:s0 tclass=file
type=AVC msg=audit(1387365750.532:5873): avc:  denied  { relabelto } for
pid=30249 comm="glusterfsd" name="23fe702e-be59-4f65-8c55-58b1b1e1b023"
dev="dm-10" ino=1835015 scontext=system_u:system_r:glusterd_t:s0
tcontext=system_u:object_r:file_t:s0 tclass=file
type=AVC msg=audit(1387440663.583:9021): avc:  denied  { sigchld } for
pid=18342 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0
tcontext=system_u:system_r:initrc_t:s0 tclass=process
type=AVC msg=audit(1387440663.687:9032): avc:  denied  { dyntransition }
for  pid=18349 comm="sshd" scontext=system_u:system_r:initrc_t:s0
tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process
type=AVC msg=audit(1387446189.047:315): avc:  denied  { lock } for
pid=3083 comm="login" path="/var/log/btmp" dev="dm-10" ino=15
scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387446196.317:318): avc:  denied  { lock } for
pid=3083 comm="login" path="/var/log/btmp" dev="dm-10" ino=15
scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387446199.539:321): avc:  denied  { lock } for
pid=3083 comm="login" path="/var/log/btmp" dev="dm-10" ino=15
scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387446225.129:332): avc:  denied  { lock } for
pid=11171 comm="login" path="/var/log/btmp" dev="dm-10" ino=15
scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387446232.941:335): avc:  denied  { lock } for
pid=11171 comm="login" path="/var/log/btmp" dev="dm-10" ino=15
scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387446246.007:338): avc:  denied  { lock } for
pid=11171 comm="login" path="/var/log/btmp" dev="dm-10" ino=15
scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387446260.743:346): avc:  denied  { lock } for
pid=11220 comm="login" path="/var/log/btmp" dev="dm-10" ino=15
scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387446266.925:353): avc:  denied  { read } for
pid=11220 comm="login" name="btmp" dev="dm-10" ino=15
scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387446266.926:356): avc:  denied  { lock } for
pid=11220 comm="login" path="/var/log/wtmp" dev="dm-10" ino=38
scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387446579.597:402): avc:  denied  { open } for
pid=11808 comm="agetty" path="/var/log/wtmp" dev="dm-10" ino=38
scontext=system_u:system_r:getty_t:s0
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387447631.139:154): avc:  denied  { lock } for
pid=3085 comm="login" path="/var/log/btmp" dev="dm-10" ino=15
scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387447636.069:161): avc:  denied  { read } for
pid=3085 comm="login" name="btmp" dev="dm-10" ino=15
scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387447636.069:164): avc:  denied  { lock } for
pid=3085 comm="login" path="/var/log/wtmp" dev="dm-10" ino=38
scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387447683.198:173): avc:  denied  { open } for
pid=4619 comm="agetty" path="/var/log/wtmp" dev="dm-10" ino=38
scontext=system_u:system_r:getty_t:s0
tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=AVC msg=audit(1387448298.423:215): avc:  denied  { sigchld } for
pid=5571 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0
tcontext=system_u:system_r:initrc_t:s0 tclass=process
type=AVC msg=audit(1387448298.529:226): avc:  denied  { dyntransition }
for  pid=5578 comm="sshd" scontext=system_u:system_r:initrc_t:s0
tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process
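

For what it's worth, `audit2allow -a` (from policycoreutils) is the usual way to turn such denials into a local policy module. Just to triage a dump like the one above, here is a short helper — a sketch of my own, assuming only the AVC record format shown here — that groups the denials by source type, target type, object class, and permission:

```python
import re
from collections import Counter

# Minimal AVC parser: pull the permission, comm, source/target SELinux types,
# and object class out of each "type=AVC ... avc:  denied  { perm } ..." record.
AVC_RE = re.compile(
    r'avc:\s+denied\s+\{ (?P<perm>[\w ]+) \}.*?'
    r'comm="(?P<comm>[^"]+)".*?'
    r'scontext=\w+:\w+:(?P<stype>\w+).*?'
    r'tcontext=\w+:\w+:(?P<ttype>\w+).*?'
    r'tclass=(?P<tclass>\w+)',
    re.DOTALL,
)

def summarize(log_text):
    """Count unique (source type, target type, class, permission) tuples."""
    counts = Counter()
    # Records wrap across lines in mail archives; re-join before splitting
    # the text back into one chunk per "type=AVC" record.
    for rec in re.split(r'(?=type=AVC )', log_text.replace('\n', ' ')):
        m = AVC_RE.search(rec)
        if m:
            counts[(m['stype'], m['ttype'], m['tclass'], m['perm'])] += 1
    return counts
```

Run over the dump above, this collapses the noise to a handful of tuples: sysstat_t/local_login_t/getty_t against var_log_t files, svirt_t against fusefs_t, glusterd_t against fsadm_exec_t and fixed_disk_device_t, and the repeating sshd sigchld/dyntransition pairs.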



On Thu, Dec 19, 2013 at 12:25 PM, Gabi C <gabicr at gmail.com> wrote:

> Hello again!
>
> After persisting the selinux config, at reboot I get "Current mode:
> enforcing" although "Mode from config file: permissive"!
> Due to this, I think I get a denial for glusterfsd:
>
> type=AVC msg=audit(1387365750.532:5873): avc:  denied  { relabelfrom }
> for  pid=30249 comm="glusterfsd"
> name="23fe702e-be59-4f65-8c55-58b1b1e1b023" dev="dm-10" ino=1835015
> scontext=system_u:system_r:glusterd_t:s0
> tcontext=system_u:object_r:file_t:s0 tclass=file
>
>
>
>
>
>
>
> On Wed, Dec 18, 2013 at 3:30 PM, Fabian Deutsch <fabiand at redhat.com> wrote:
>
>> On Wednesday, 18.12.2013, 14:14 +0200, Gabi C wrote:
>> > Still, now I cannot start either of the two machines! I get:
>> >
>> > "ID 119 VM proxy2 is down. Exit message: Child quit during startup
>> > handshake: Input/output error."
>>
>> Could you try to find out in what context this IO error appears?
>>
>> - fabian
>>
>> >
>> > Something similar to bug
>> > https://bugzilla.redhat.com/show_bug.cgi?id=1033064, except that in my
>> > case selinux is permissive!
>> >
>> >
>> >
>> > On Wed, Dec 18, 2013 at 2:10 PM, Gabi C <gabicr at gmail.com> wrote:
>> >         in my case $brick_path = /data
>> >
>> >
>> >         getfattr -d /data returns NOTHING on both nodes!!!
>> >
>> >
>> >
>> >
>> >         On Wed, Dec 18, 2013 at 1:46 PM, Fabian Deutsch
>> >         <fabiand at redhat.com> wrote:
>> >                 On Wednesday, 18.12.2013, 13:26 +0200, Gabi C wrote:
>> >                 > Update on Glusterfs issue
>> >                 >
>> >                 >
>> >                 > I managed to recover the lost volume after recreating
>> >                 > the same volume name with the same bricks, which raised
>> >                 > an error message, resolved on both nodes by:
>> >                 >
>> >                 > setfattr -x trusted.glusterfs.volume-id $brick_path
>> >                 > setfattr -x trusted.gfid $brick_path
>> >
>> >
>> >                 Hey,
>> >
>> >                 good that you could recover them.
>> >
>> >                 Could you please provide $brick_path and the output of
>> >                 getfattr -d $brick_path?
>> >
>> >                 The question is whether, and if so why, the fattrs are
>> >                 not being stored.
>> >
>> >                 - fabian
>> >
>> >                 >
>> >                 >
>> >                 >
>> >                 > On Wed, Dec 18, 2013 at 12:12 PM, Gabi C
>> >                 <gabicr at gmail.com> wrote:
>> >                 >         node 1:
>> >                 >
>> >                 >         [root at virtual5 admin]# cat /config/files
>> >                 >         /etc/fstab
>> >                 >         /etc/shadow
>> >                 >         /etc/default/ovirt
>> >                 >         /etc/ssh/ssh_host_key
>> >                 >         /etc/ssh/ssh_host_key.pub
>> >                 >         /etc/ssh/ssh_host_dsa_key
>> >                 >         /etc/ssh/ssh_host_dsa_key.pub
>> >                 >         /etc/ssh/ssh_host_rsa_key
>> >                 >         /etc/ssh/ssh_host_rsa_key.pub
>> >                 >         /etc/rsyslog.conf
>> >                 >         /etc/libvirt/libvirtd.conf
>> >                 >         /etc/libvirt/passwd.db
>> >                 >         /etc/passwd
>> >                 >         /etc/sysconfig/network
>> >                 >         /etc/collectd.conf
>> >                 >         /etc/libvirt/qemu/networks
>> >                 >         /etc/ssh/sshd_config
>> >                 >         /etc/pki
>> >                 >         /etc/logrotate.d/ovirt-node
>> >                 >         /var/lib/random-seed
>> >                 >         /etc/iscsi/initiatorname.iscsi
>> >                 >         /etc/libvirt/qemu.conf
>> >                 >         /etc/sysconfig/libvirtd
>> >                 >         /etc/logrotate.d/libvirtd
>> >                 >         /etc/multipath.conf
>> >                 >         /etc/hosts
>> >                 >         /etc/sysconfig/network-scripts/ifcfg-enp3s0
>> >                 >         /etc/sysconfig/network-scripts/ifcfg-lo
>> >                 >         /etc/ntp.conf
>> >                 >         /etc/shadow
>> >                 >         /etc/vdsm-reg/vdsm-reg.conf
>> >                 >         /etc/shadow
>> >                 >         /etc/shadow
>> >                 >         /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
>> >                 >         /etc/sysconfig/network-scripts/route-ovirtmgmt
>> >                 >         /etc/sysconfig/network-scripts/rule-ovirtmgmt
>> >                 >         /root/.ssh/authorized_keys
>> >                 >         /etc/vdsm/vdsm.id
>> >                 >         /etc/udev/rules.d/12-ovirt-iosched.rules
>> >                 >         /etc/vdsm/vdsm.conf
>> >                 >         /etc/sysconfig/iptables
>> >                 >         /etc/resolv.conf
>> >                 >         /etc/sysconfig/network-scripts/ifcfg-VPO_IPPROXY
>> >                 >         /etc/sysconfig/network-scripts/ifcfg-enp6s0
>> >                 >         /etc/sysconfig/network-scripts/ifcfg-enp6s0.50
>> >                 >         /etc/glusterfs/glusterd.vol
>> >                 >         /etc/selinux/config
>> >                 >
>> >                 >
>> >                 >
>> >                 >
>> >                 >
>> >                 >
>> >                 >
>> >                 >
>> >                 >         node 2:
>> >                 >
>> >                 >
>> >                 >         [root at virtual4 ~]# cat /config/files
>> >                 >         /etc/fstab
>> >                 >         /etc/shadow
>> >                 >         /etc/default/ovirt
>> >                 >         /etc/ssh/ssh_host_key
>> >                 >         /etc/ssh/ssh_host_key.pub
>> >                 >         /etc/ssh/ssh_host_dsa_key
>> >                 >         /etc/ssh/ssh_host_dsa_key.pub
>> >                 >         /etc/ssh/ssh_host_rsa_key
>> >                 >         /etc/ssh/ssh_host_rsa_key.pub
>> >                 >         /etc/rsyslog.conf
>> >                 >         /etc/libvirt/libvirtd.conf
>> >                 >         /etc/libvirt/passwd.db
>> >                 >         /etc/passwd
>> >                 >         /etc/sysconfig/network
>> >                 >         /etc/collectd.conf
>> >                 >         /etc/libvirt/qemu/networks
>> >                 >         /etc/ssh/sshd_config
>> >                 >         /etc/pki
>> >                 >         /etc/logrotate.d/ovirt-node
>> >                 >         /var/lib/random-seed
>> >                 >         /etc/iscsi/initiatorname.iscsi
>> >                 >         /etc/libvirt/qemu.conf
>> >                 >         /etc/sysconfig/libvirtd
>> >                 >         /etc/logrotate.d/libvirtd
>> >                 >         /etc/multipath.conf
>> >                 >         /etc/hosts
>> >                 >         /etc/sysconfig/network-scripts/ifcfg-enp3s0
>> >                 >         /etc/sysconfig/network-scripts/ifcfg-lo
>> >                 >         /etc/shadow
>> >                 >         /etc/shadow
>> >                 >         /etc/vdsm-reg/vdsm-reg.conf
>> >                 >         /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
>> >                 >         /etc/sysconfig/network-scripts/route-ovirtmgmt
>> >                 >         /etc/sysconfig/network-scripts/rule-ovirtmgmt
>> >                 >         /root/.ssh/authorized_keys
>> >                 >         /etc/shadow
>> >                 >         /etc/shadow
>> >                 >         /etc/vdsm/vdsm.id
>> >                 >         /etc/udev/rules.d/12-ovirt-iosched.rules
>> >                 >         /etc/sysconfig/iptables
>> >                 >         /etc/vdsm/vdsm.conf
>> >                 >         /etc/shadow
>> >                 >         /etc/resolv.conf
>> >                 >         /etc/ntp.conf
>> >                 >         /etc/sysconfig/network-scripts/ifcfg-VPO_IPPROXY
>> >                 >         /etc/sysconfig/network-scripts/ifcfg-enp6s0
>> >                 >         /etc/sysconfig/network-scripts/ifcfg-enp6s0.50
>> >                 >         /etc/glusterfs/glusterd.vol
>> >                 >         /etc/selinux/config
>> >                 >
>> >                 >
>> >                 >
>> >                 >
>> >                 >         On Wed, Dec 18, 2013 at 12:07 PM, Fabian Deutsch <fabiand at redhat.com> wrote:
>> >                 >                 On Wednesday, 18.12.2013, 12:03 +0200, Gabi C wrote:
>> >                 >                 > So here it is:
>> >                 >                 >
>> >                 >                 > In the Volumes tab I added a new volume - Replicated - then added storage - data/glusterfs. Then I imported VMs and ran them, and at some point, needing some space for a Red Hat Satellite instance, I decided to put both nodes in maintenance, stop them, add new disk devices and restart - but after the restart the gluster volume defined under the Volumes tab had vanished!
>> >                 >
>> >                 >
>> >                 >                 Antoni,
>> >                 >
>> >                 >                 can you tell which log files to look at to find out why that storage domain vanished - from the Engine side?
>> >                 >
>> >                 >                 And do you know what files related to gluster are changed on the Node side?
>> >                 >
>> >                 >                 Gabi,
>> >                 >
>> >                 >                 could you please provide the contents of /config/files on the Node.
>> >                 >
>> >                 >                 > Glusterfs data goes under the /data directory, which was configured automatically when I installed the node.
>> >                 >
>> >                 >                 Yep, /data is on the Data LV - that should be good.
>> >                 >
>> >                 >                 - fabian
>> >                 >
>> >                 >                 >
>> >                 >                 >
>> >                 >                 > On Wed, Dec 18, 2013 at 11:45 AM, Fabian Deutsch <fabiand at redhat.com> wrote:
>> >                 >                 >         On Wednesday, 18.12.2013, 11:42 +0200, Gabi C wrote:
>> >                 >                 >         > Yes, it is the VM part... I just ran into an issue. My setup consists of 2 nodes with glusterfs, and after adding a supplemental hard disk and rebooting, I lost the glusterfs volumes!
>> >                 >                 >
>> >                 >                 >
>> >                 >                 >         Could you explain exactly what you configured?
>> >                 >                 >
>> >                 >                 >         >
>> >                 >                 >         > How can I persist any configuration on the node? I refer here to ''setenforce 0'' - needed for ssh login to work - and further:
>> >                 >                 >
>> >                 >                 >         How changes can be persisted on the Node is described here:
>> >                 >                 >         http://www.ovirt.org/Node_Troubleshooting#Making_changes_on_the_host
>> >                 >                 >
>> >                 >                 >         Do you know into which path the glusterfs data goes? Or is it written directly onto a disk/LV?
>> >                 >                 >
>> >                 >                 >         - fabian
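[The Node_Troubleshooting page referenced in this thread comes down to the Node's `persist` tooling. A minimal sketch of that workflow, assuming a stock oVirt Node shell - the file paths are only examples taken from the /config/files listings earlier in the thread:]

```shell
# On oVirt Node most of the root filesystem is stateless: edits under
# /etc are lost at reboot unless explicitly persisted. The persist tool
# copies a file into /config and bind-mounts it back over the original,
# so the change survives reboots; unpersist reverses this.
persist /etc/glusterfs/glusterd.vol    # keep an edited gluster config
persist /etc/selinux/config            # e.g. after relaxing SELinux policy
cat /config/files                      # list everything currently persisted
unpersist /etc/selinux/config          # undo, reverting to the stock file
```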
>> >                 >                 >
>> >                 >                 >         > "http://www.ovirt.org/Features/GlusterFS_Storage_Domain
>> >                 >                 >         >       * option rpc-auth-allow-insecure on ==> in glusterd.vol (ensure you restart the glusterd service for this to take effect)
>> >                 >                 >         >       * volume set <volname> server.allow-insecure on ==> (ensure you stop and start the volume for this to take effect)"
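[For reference, the two options quoted from the wiki would look roughly as follows. This is only a sketch: the surrounding `volume management` block mirrors a typical stock glusterd.vol, and `<volname>` stays a placeholder for the actual volume name:]

```
# /etc/glusterfs/glusterd.vol - allow client connections from
# unprivileged (>1024) ports; restart glusterd afterwards:
volume management
    type mgmt/glusterd
    option rpc-auth-allow-insecure on
end-volume

# Per-volume counterpart, set via the gluster CLI
# (stop and start the volume afterwards):
#   gluster volume set <volname> server.allow-insecure on
```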
>> >                 >                 >         >
>> >                 >                 >         >
>> >                 >                 >         > Thanks!
>> >                 >                 >         >
>> >                 >                 >         >
>> >                 >                 >         >
>> >                 >                 >         >
>> >                 >                 >         >
>> >                 >                 >         > On Wed, Dec 18, 2013 at 11:35 AM, Fabian Deutsch <fabiand at redhat.com> wrote:
>> >                 >                 >         >         On Wednesday, 18.12.2013, 08:34 +0200, Gabi C wrote:
>> >                 >                 >         >         > Hello!
>> >                 >                 >         >         >
>> >                 >                 >         >         > In order to increase disk space I want to add a new disk drive to the oVirt node. After adding it, should I proceed as "normal" - pvcreate, vgcreate, lvcreate and so on - or will this configuration not persist?
>> >                 >                 >         >
>> >                 >                 >         >
>> >                 >                 >         >         Hey Gabi,
>> >                 >                 >         >
>> >                 >                 >         >         Basically, plain LVM is used on the Node - so yes, pvcreate and lvextend can be used.
>> >                 >                 >         >         What storage part do you want to extend? The part where the VMs reside?
>> >                 >                 >         >         You will also need to take care to extend the filesystem.
>> >                 >                 >         >
>> >                 >                 >         >         - fabian
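[For what it's worth, the steps Fabian outlines might look like the sketch below. The device name /dev/sdb and the HostVG/Data volume names are assumptions - check `vgs` and `lvs` on the node first - and the resize tool must match the actual filesystem (resize2fs here assumes ext4):]

```shell
pvcreate /dev/sdb                        # initialise the new disk as an LVM PV
vgextend HostVG /dev/sdb                 # add the PV to the existing volume group
lvextend -l +100%FREE /dev/HostVG/Data   # grow the Data LV, where VM data lives
resize2fs /dev/HostVG/Data               # grow the ext4 filesystem to match the LV
```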


More information about the Users mailing list