[ovirt-users] vdsm storage problem - maybe cache problem?
Maor Lipchuk
mlipchuk at redhat.com
Mon May 18 12:21:06 UTC 2015
Hi Mario,
Can you try to mount this directly from the host?
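For example, something along these lines (the mount point and the NFS
server/export below are placeholders; replace them with your actual
storage server and path):

    mkdir -p /mnt/nfs-test
    mount -t nfs your.nfs.server:/your/export /mnt/nfs-test
    ls -l /mnt/nfs-test
    umount /mnt/nfs-test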
Can you please also attach the VDSM and engine logs?
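If the full logs are too large to attach, the relevant parts should be
enough; for example, something like this on the host, plus the engine
log from the engine machine (typically /var/log/ovirt-engine/engine.log):

    grep ERROR /var/log/vdsm/vdsm.log | tail -n 50
    grep 036b5575-51fa-4f14-8b05-890d7807894c /var/log/vdsm/vdsm.log | tail -n 20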
Thanks,
Maor
----- Original Message -----
> From: ml at ohnewald.net
> To: "Maor Lipchuk" <mlipchuk at redhat.com>
> Cc: users at ovirt.org
> Sent: Monday, May 18, 2015 2:36:38 PM
> Subject: Re: [ovirt-users] vdsm storage problem - maybe cache problem?
>
> Hi Maor,
>
> thanks for the quick reply.
>
> On 18.05.15 at 13:25, Maor Lipchuk wrote:
>
> >> Now my question: Why does the vdsm node not know that I deleted the
> >> storage? Has vdsm cached this mount information? Why does it still
> >> try to access 036b5575-51fa-4f14-8b05-890d7807894c?
> >
> >
> > Yes, VDSM uses a cache for Storage Domains. You can try to restart the
> > vdsmd service instead of rebooting the host.
> >
>
> I am still getting the same error.
>
>
> [root at ovirt-node01 ~]# /etc/init.d/vdsmd stop
> Shutting down vdsm daemon:
> vdsm watchdog stop [ OK ]
> vdsm: Running run_final_hooks [ OK ]
> vdsm stop [ OK ]
> [root at ovirt-node01 ~]#
> [root at ovirt-node01 ~]#
> [root at ovirt-node01 ~]#
> [root at ovirt-node01 ~]# ps aux | grep vdsmd
> root 3198 0.0 0.0 11304 740 ? S< May07 0:00
> /bin/bash -e /usr/share/vdsm/respawn --minlifetime 10 --daemon
> --masterpid /var/run/vdsm/supervdsm_respawn.pid
> /usr/share/vdsm/supervdsmServer --sockfile /var/run/vdsm/svdsm.sock
> --pidfile /var/run/vdsm/supervdsmd.pid
> root 3205 0.0 0.0 922368 26724 ? S<l May07 12:10
> /usr/bin/python /usr/share/vdsm/supervdsmServer --sockfile
> /var/run/vdsm/svdsm.sock --pidfile /var/run/vdsm/supervdsmd.pid
> root 15842 0.0 0.0 103248 900 pts/0 S+ 13:35 0:00 grep vdsmd
>
>
> [root at ovirt-node01 ~]# /etc/init.d/vdsmd start
> initctl: Job is already running: libvirtd
> vdsm: Running mkdirs
> vdsm: Running configure_coredump
> vdsm: Running configure_vdsm_logs
> vdsm: Running run_init_hooks
> vdsm: Running gencerts
> vdsm: Running check_is_configured
> libvirt is already configured for vdsm
> sanlock service is already configured
> vdsm: Running validate_configuration
> SUCCESS: ssl configured to true. No conflicts
> vdsm: Running prepare_transient_repository
> vdsm: Running syslog_available
> vdsm: Running nwfilter
> vdsm: Running dummybr
> vdsm: Running load_needed_modules
> vdsm: Running tune_system
> vdsm: Running test_space
> vdsm: Running test_lo
> vdsm: Running restore_nets
> vdsm: Running unified_network_persistence_upgrade
> vdsm: Running upgrade_300_nets
> Starting up vdsm daemon:
> vdsm start [ OK ]
> [root at ovirt-node01 ~]#
>
> [root at ovirt-node01 ~]# grep ERROR /var/log/vdsm/vdsm.log | tail -n 20
> Thread-13::ERROR::2015-05-18
> 13:35:03,631::sdc::137::Storage.StorageDomainCache::(_findDomain)
> looking for unfetched domain abc51e26-7175-4b38-b3a8-95c6928fbc2b
> Thread-13::ERROR::2015-05-18
> 13:35:03,632::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
> looking for domain abc51e26-7175-4b38-b3a8-95c6928fbc2b
> Thread-36::ERROR::2015-05-18
> 13:35:11,607::sdc::137::Storage.StorageDomainCache::(_findDomain)
> looking for unfetched domain 036b5575-51fa-4f14-8b05-890d7807894c
> Thread-36::ERROR::2015-05-18
> 13:35:11,621::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
> looking for domain 036b5575-51fa-4f14-8b05-890d7807894c
> Thread-36::ERROR::2015-05-18
> 13:35:11,960::sdc::143::Storage.StorageDomainCache::(_findDomain) domain
> 036b5575-51fa-4f14-8b05-890d7807894c not found
> Thread-36::ERROR::2015-05-18
> 13:35:11,960::domainMonitor::239::Storage.DomainMonitorThread::(_monitorDomain)
> Error while collecting domain 036b5575-51fa-4f14-8b05-890d7807894c
> monitoring information
> Thread-36::ERROR::2015-05-18
> 13:35:21,962::sdc::137::Storage.StorageDomainCache::(_findDomain)
> looking for unfetched domain 036b5575-51fa-4f14-8b05-890d7807894c
> Thread-36::ERROR::2015-05-18
> 13:35:21,965::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
> looking for domain 036b5575-51fa-4f14-8b05-890d7807894c
> Thread-36::ERROR::2015-05-18
> 13:35:22,068::sdc::143::Storage.StorageDomainCache::(_findDomain) domain
> 036b5575-51fa-4f14-8b05-890d7807894c not found
> Thread-36::ERROR::2015-05-18
> 13:35:22,072::domainMonitor::239::Storage.DomainMonitorThread::(_monitorDomain)
> Error while collecting domain 036b5575-51fa-4f14-8b05-890d7807894c
> monitoring information
> Thread-15::ERROR::2015-05-18
> 13:35:33,821::task::866::TaskManager.Task::(_setError)
> Task=`54bdfc77-f63a-493b-b24e-e5a3bc4977bb`::Unexpected error
> Thread-15::ERROR::2015-05-18
> 13:35:33,864::dispatcher::65::Storage.Dispatcher.Protect::(run)
> {'status': {'message': "Unknown pool id, pool not connected:
> ('b384b3da-02a6-44f3-a3f6-56751ce8c26d',)", 'code': 309}}
> Thread-13::ERROR::2015-05-18
> 13:35:33,930::sdc::137::Storage.StorageDomainCache::(_findDomain)
> looking for unfetched domain abc51e26-7175-4b38-b3a8-95c6928fbc2b
> Thread-15::ERROR::2015-05-18
> 13:35:33,928::task::866::TaskManager.Task::(_setError)
> Task=`fe9bb0fa-cf1e-4b21-af00-0698c6d1718f`::Unexpected error
> Thread-13::ERROR::2015-05-18
> 13:35:33,932::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
> looking for domain abc51e26-7175-4b38-b3a8-95c6928fbc2b
> Thread-15::ERROR::2015-05-18
> 13:35:33,978::dispatcher::65::Storage.Dispatcher.Protect::(run)
> {'status': {'message': 'Not SPM: ()', 'code': 654}}
> Thread-36::ERROR::2015-05-18
> 13:35:41,117::sdc::137::Storage.StorageDomainCache::(_findDomain)
> looking for unfetched domain 036b5575-51fa-4f14-8b05-890d7807894c
> Thread-36::ERROR::2015-05-18
> 13:35:41,131::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
> looking for domain 036b5575-51fa-4f14-8b05-890d7807894c
> Thread-36::ERROR::2015-05-18
> 13:35:41,452::sdc::143::Storage.StorageDomainCache::(_findDomain) domain
> 036b5575-51fa-4f14-8b05-890d7807894c not found
> Thread-36::ERROR::2015-05-18
> 13:35:41,453::domainMonitor::239::Storage.DomainMonitorThread::(_monitorDomain)
> Error while collecting domain 036b5575-51fa-4f14-8b05-890d7807894c
> monitoring information
>
>
> Thanks,
> Mario
>
>