[ovirt-users] Gluster Data domain not correctly setup at boot
Nir Soffer
nsoffer at redhat.com
Mon Jan 4 17:27:50 UTC 2016
On Mon, Jan 4, 2016 at 6:36 PM, Stefano Danzi <s.danzi at hawai.it> wrote:
> I have a single testing host with a hosted engine and a gluster data
> domain (on the same machine).
>
> When I start the host and the engine, the data domain shows as active ("up
> and green"), but in the event list I get:
>
> - Invalid status on Data Center Default. Setting status to Non Responsive
> - Storage Domain Data (Data Center Default) was deactivated by system
> because it's not visible by any of the hosts
>
> If I try to start a VM I get:
>
> - Failed to run onSalesSRV on Host ovirt01
> - VM onSalesSRV is down with error. Exit message: Cannot access storage file
> '/rhev/data-center/00000002-0002-0002-0002-0000000001ef/f739b27a-35bf-49c7-a95b-a92ec5c10320/images......
>
> The gluster volume is correctly mounted:
>
> [root at ovirt01 ~]# df -h
> Filesystem                                    Size  Used Avail Use% Mounted on
> /dev/mapper/centos_ovirt01-root                50G   18G   33G  35% /
> devtmpfs                                      7.8G     0  7.8G   0% /dev
> tmpfs                                         7.8G     0  7.8G   0% /dev/shm
> tmpfs                                         7.8G   17M  7.8G   1% /run
> tmpfs                                         7.8G     0  7.8G   0% /sys/fs/cgroup
> /dev/mapper/centos_ovirt01-home                10G  1.3G  8.8G  13% /home
> /dev/mapper/centos_ovirt01-glusterOVEngine     50G   11G   40G  22% /home/glusterfs/engine
> /dev/md0                                      494M  244M  251M  50% /boot
> /dev/mapper/centos_ovirt01-glusterOVData      500G  135G  366G  27% /home/glusterfs/data
> ovirt01.hawai.lan:/engine                      50G   11G   40G  22% /rhev/data-center/mnt/ovirt01.hawai.lan:_engine
> tmpfs                                         1.6G     0  1.6G   0% /run/user/0
> ovirtbk-sheng.hawai.lan:/var/lib/exports/iso   22G  7.6G   15G  35% /rhev/data-center/mnt/ovirtbk-sheng.hawai.lan:_var_lib_exports_iso
> ovirt01.hawai.lan:/data                       500G  135G  366G  27% /rhev/data-center/mnt/glusterSD/ovirt01.hawai.lan:_data
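(As a side note, the GlusterFS FUSE mounts can also be confirmed independently of df by reading /proc/mounts, which is where df gets its data. A small sketch, with an illustrative helper name; the optional file argument only exists so the filter can be tried on a sample:)

```shell
# Sketch: list GlusterFS FUSE mounts the way df sees them.
# With no argument it reads /proc/mounts; the argument is only
# there so the filter can be exercised on a sample file.
list_gluster_mounts() {
    awk '$3 == "fuse.glusterfs" {print $1, "on", $2}' "${1:-/proc/mounts}"
}
```

On the host above this should print ovirt01.hawai.lan:/engine and ovirt01.hawai.lan:/data with their mountpoints; the ISO domain is NFS and would not show up.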
>
> But the link under '/rhev/data-center/0....' is missing:
>
> [root at ovirt01 ~]# ls -la /rhev/data-center/00000002-0002-0002-0002-0000000001ef/
> total 0
> drwxr-xr-x. 2 vdsm kvm 64 Jan  4 14:31 .
> drwxr-xr-x. 4 vdsm kvm 59 Jan  4 14:31 ..
> lrwxrwxrwx. 1 vdsm kvm 84 Jan  4 14:31 46f55a31-f35f-465c-b3e2-df45c05e06a7 -> /rhev/data-center/mnt/ovirt01.hawai.lan:_engine/46f55a31-f35f-465c-b3e2-df45c05e06a7
> lrwxrwxrwx. 1 vdsm kvm 84 Jan  4 14:31 mastersd -> /rhev/data-center/mnt/ovirt01.hawai.lan:_engine/46f55a31-f35f-465c-b3e2-df45c05e06a7
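(Inline note: in a healthy datacenter, every storage-domain directory mounted under /rhev/data-center/mnt has a matching symlink here; vdsm creates the links when a domain is activated. A quick shell sketch to spot the gap; the helper name is made up, and the UUID regex is just an assumption about how the domain directories are named:)

```shell
# Sketch: report storage-domain directories that are mounted under MNT_DIR
# but have no matching symlink in DC_DIR. Searches up to three levels deep
# because GlusterFS domains live one level deeper (under glusterSD/).
check_domain_links() {
    dc_dir=$1    # e.g. /rhev/data-center/00000002-0002-0002-0002-0000000001ef
    mnt_dir=$2   # e.g. /rhev/data-center/mnt
    find "$mnt_dir" -maxdepth 3 -type d -regextype posix-extended \
         -regex '.*/[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}' |
    while read -r sd; do
        uuid=$(basename "$sd")
        [ -L "$dc_dir/$uuid" ] || echo "missing link: $uuid"
    done
}
```

Run against the directories above, it should report the data domain (f739b27a-35bf-49c7-a95b-a92ec5c10320) and the ISO domain, the two that have no link in the broken state.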
>
> If I put the data domain into maintenance mode and reactivate it, I can run the VMs.
> The mounted filesystems are the same, but now the links exist under /rhev/data-center/:
>
> [root at ovirt01 ~]# ls -la /rhev/data-center/00000002-0002-0002-0002-0000000001ef/
> total 4
> drwxr-xr-x. 2 vdsm kvm 4096 Jan  4 17:10 .
> drwxr-xr-x. 4 vdsm kvm   59 Jan  4 17:10 ..
> lrwxrwxrwx. 1 vdsm kvm   84 Jan  4 14:31 46f55a31-f35f-465c-b3e2-df45c05e06a7 -> /rhev/data-center/mnt/ovirt01.hawai.lan:_engine/46f55a31-f35f-465c-b3e2-df45c05e06a7
> lrwxrwxrwx. 1 vdsm kvm  103 Jan  4 17:10 8cccc37f-d2d4-4684-a389-ac1adb050fa8 -> /rhev/data-center/mnt/ovirtbk-sheng.hawai.lan:_var_lib_exports_iso/8cccc37f-d2d4-4684-a389-ac1adb050fa8
> lrwxrwxrwx. 1 vdsm kvm   92 Jan  4 17:10 f739b27a-35bf-49c7-a95b-a92ec5c10320 -> /rhev/data-center/mnt/glusterSD/ovirt01.hawai.lan:_data/f739b27a-35bf-49c7-a95b-a92ec5c10320
> lrwxrwxrwx. 1 vdsm kvm   84 Jan  4 14:31 mastersd -> /rhev/data-center/mnt/ovirt01.hawai.lan:_engine/46f55a31-f35f-465c-b3e2-df45c05e06a7
This sounds like https://bugzilla.redhat.com/1271771
This patch may fix it: https://gerrit.ovirt.org/#/c/27334/
Would you like to test it?
To dig deeper, we need the logs:
- /var/log/vdsm/vdsm.log (the one showing this timeframe)
- /var/log/sanlock.log
- /var/log/messages
- /var/log/glusterfs/<server>:<remotepath>-<date>.log
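Something along these lines can bundle them for attaching to the thread (the helper and archive names are only a suggestion; the gluster log name depends on the volume's server and mount path as noted above):

```shell
# Sketch: bundle the requested log files into one tar.gz archive.
# Missing files are skipped so one absent log does not abort the run.
collect_logs() {
    out=$1; shift
    files=""
    for f in "$@"; do
        [ -e "$f" ] && files="$files $f"
    done
    # word splitting of $files is intentional; these log paths have no spaces
    tar czf "$out" $files
}

# On the host from this thread, roughly:
# collect_logs ovirt-logs.tar.gz /var/log/vdsm/vdsm.log \
#     /var/log/sanlock.log /var/log/messages /var/log/glusterfs/*.log
```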
Nir
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users