[ovirt-users] Gluster storage domain error after upgrading to 3.6

Nir Soffer nsoffer at redhat.com
Fri Nov 6 10:49:50 UTC 2015


On Fri, Nov 6, 2015 at 10:38 AM, Stefano Danzi <s.danzi at hawai.it> wrote:
> I patched the code for "emergency"....

A safer way is to downgrade vdsm to the previous version.

> I can't find how to change the configuration.

1. Put the gluster domain in maintenance
    - select the domain in the storage tab
    - in the "data center" sub tab, click maintenance
    - the domain will turn to locked, and then to maintenance mode

2. Edit the domain
3. Change the address to the same one configured in gluster (ovirt01...);
   the sketch after this list shows one way to check which hosts gluster reports
4. Activate the domain
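
If you are not sure which address gluster is configured with, you can list
the brick hosts of the volume. A minimal sketch, assuming the gluster CLI is
available on the host and the volume is named "data" as in your log (this is
not something vdsm does for you):

    import subprocess

    # Brick lines in `gluster volume info` output look like
    # "Brick1: <host>:/<path>"; print only the host part.
    out = subprocess.check_output(["gluster", "volume", "info", "data"]).decode()
    for line in out.splitlines():
        label = line.split(":", 1)[0].strip()
        if label.startswith("Brick") and label[len("Brick"):].isdigit():
            print(line.split(":", 2)[1].strip())

The address you put in the storage domain should match one of the hosts
printed here.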

> But I think that's a bug:
>
> - Everything worked in oVirt 3.5; after the upgrade it stopped working

Yes, it should keep working after an upgrade

However, a straightforward setup, such as using the same server address in
both oVirt and Gluster, increases the chance that things will continue to
work after an upgrade.

> - The log shows a Python exception.

This is good, and makes debugging this issue easy.

> One more thought:
>
> If there are changes in configuration requirements, I should be warned during
> the upgrade, or at least find a specific error in the log.

Correct

> ...removing something that does not exist from a list,

The code assumes that the server address is in the list, so removing
it is correct.

This assumption is wrong; we will have to change the code to handle this case.
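
For example, a minimal sketch of the kind of handling I mean, written as a
standalone helper rather than the actual vdsm method (the brick host names
in the comment are made up; ovirtbk-mount.hawai.lan is the address from
your log):

    def backup_servers_option(brick_servers, volfileserver):
        # Build the backup-volfile-servers mount option without assuming
        # that the address used for the storage domain is one of the
        # gluster brick hosts.
        servers = list(brick_servers)
        if volfileserver in servers:
            servers.remove(volfileserver)
        # If the configured address is a separate mount alias, keep all
        # brick hosts as backups instead of failing with ValueError.
        if not servers:
            return ""
        return "backup-volfile-servers=" + ":".join(servers)

    # backup_servers_option(["gluster1.example.com", "gluster2.example.com"],
    #                       "ovirtbk-mount.hawai.lan")
    # -> 'backup-volfile-servers=gluster1.example.com:gluster2.example.com'

Logging a warning when the address is not in the brick list would also make
a mismatch like yours obvious in vdsm.log instead of a raw traceback.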

> and leaving a cryptic Python
> exception in the error log, isn't the best solution...

The traceback in the log is very important; without it, it would be very
hard to debug this issue.

>
>
> On 06/11/2015 8:12, Nir Soffer wrote:
>
>
> On Nov 5, 2015 8:18 PM, "Stefano Danzi" <s.danzi at hawai.it> wrote:
>>
>> To temporarily solve the problem I patched storageServer.py as suggested in
>> the link above.
>
> I would not patch the code but change the configuration.
>
>> I can't find a related issue on bugzilla.
>
> Would you file a bug about this?
>
>>
>>
>> On 05/11/2015 11:43, Stefano Danzi wrote:
>>>
>>> My error is related to this message:
>>>
>>> http://lists.ovirt.org/pipermail/users/2015-August/034316.html
>>>
>>> On 05/11/2015 00:28, Stefano Danzi wrote:
>>>>
>>>> Hello,
>>>> I have an oVirt installation with only one host and a self-hosted engine.
>>>> My master data storage domain is of GlusterFS type.
>>>>
>>>> After upgrading to oVirt 3.6, the data storage domain and the default
>>>> datacenter are down.
>>>> The error in vdsm.log is:
>>>>
>>>> Thread-6585::DEBUG::2015-11-04
>>>> 23:55:00,173::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`86e72580-fa76-4347-b919-a73970d12682`::moving from state init -> state preparing
>>>> Thread-6585::INFO::2015-11-04
>>>> 23:55:00,173::logUtils::48::dispatcher::(wrapper) Run and protect:
>>>> connectStorageServer(domType=7,
>>>> spUUID=u'00000000-0000-0000-0000-000000000000',
>>>> conList=[{u'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499', u'connection':
>>>> u'ovirtbk-mount.hawai.lan:data', u'iqn': u'', u'user': u'', u'tpgt': u'1',
>>>> u'vfs_type': u'glusterfs', u'password':
>>>>  '********', u'port': u''}], options=None)
>>>> Thread-6585::DEBUG::2015-11-04
>>>> 23:55:00,235::fileUtils::143::Storage.fileUtils::(createdir) Creating
>>>> directory: /rhev/data-center/mnt/glusterSD/ovirtbk-mount.hawai.lan:data
>>>> mode: None
>>>> Thread-6585::WARNING::2015-11-04
>>>> 23:55:00,235::fileUtils::152::Storage.fileUtils::(createdir) Dir
>>>> /rhev/data-center/mnt/glusterSD/ovirtbk-mount.hawai.lan:data already exists
>>>> Thread-6585::ERROR::2015-11-04
>>>> 23:55:00,235::hsm::2465::Storage.HSM::(connectStorageServer) Could not
>>>> connect to storageServer
>>>> Traceback (most recent call last):
>>>>   File "/usr/share/vdsm/storage/hsm.py", line 2462, in
>>>> connectStorageServer
>>>>     conObj.connect()
>>>>   File "/usr/share/vdsm/storage/storageServer.py", line 224, in connect
>>>>     self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
>>>>   File "/usr/share/vdsm/storage/storageServer.py", line 323, in options
>>>>     backup_servers_option = self._get_backup_servers_option()
>>>>   File "/usr/share/vdsm/storage/storageServer.py", line 340, in
>>>> _get_backup_servers_option
>>>>     servers.remove(self._volfileserver)
>>>> ValueError: list.remove(x): x not in list
>>>> Thread-6585::DEBUG::2015-11-04
>>>> 23:55:00,235::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs:
>>>> {46f55a31-f35f-465c-b3e2-df45c05e06a7: storage.nfsSD.findDomain}
>>>> Thread-6585::INFO::2015-11-04
>>>> 23:55:00,236::logUtils::51::dispatcher::(wrapper) Run and protect:
>>>> connectStorageServer, Return response: {'statuslist': [{'status': 100, 'id':
>>>> u'dc0eaef1-8494-4e35-abea-80a4e7f37499'}]}
>>>> Thread-6585::DEBUG::2015-11-04
>>>> 23:55:00,236::task::1191::Storage.TaskManager.Task::(prepare)
>>>> Task=`86e72580-fa76-4347-b919-a73970d12682`::finished: {'statuslist':
>>>> [{'status': 100, 'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499'}]}
>>>> Thread-6585::DEBUG::2015-11-04
>>>> 23:55:00,236::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`86e72580-fa76-4347-b919-a73970d12682`::moving from state preparing ->
>>>> state finished


