In any case, please open a bug for better tracking.
Attach the logs and the steps to reproduce.

On Mon, Mar 7, 2016 at 10:44 AM, <nicolas@devels.es> wrote:
Hi Fred,

I'm attaching the requested logs. As this was urgent for us, we ended up putting all hosts into maintenance and removing the storage that way. However, we have another oVirt infrastructure where we'll soon need to perform the same steps, so if needed we can run some tests on it.

If you prefer I can open a bug report.

Regards.

On 2016-03-06 12:57, Fred Rolland wrote:
Hi,

Can you please attach the full logs (VDSM and engine) ?

Thanks,

Fred

On Wed, Mar 2, 2016 at 3:19 PM, <nicolas@devels.es> wrote:

Hi,

We've migrated our storage from GlusterFS to iSCSI, so we now have two
storage domains in our data center. As the migration is finished, we
want to remove the Gluster storage from the data center (it is
currently the master storage domain).

We've tried to put it into maintenance, but we're getting this error:

2016-03-02 13:02:02,087 ERROR

[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStoragePoolVDSCommand]
(org.ovirt.thread.pool-8-thread-34) [259a3130] Command
'DisconnectStoragePoolVDSCommand(HostName = ovirt01.domain.com,
DisconnectStoragePoolVDSCommandParameters:{runAsync='true',
hostId='c31dca1a-e5bc-43f6-940f-6397e3ddbee4',
storagePoolId='fa155d43-4e68-486f-9f9d-ae3e3916cc4f',
vds_spm_id='7'})' execution failed: VDSGenericException:
VDSErrorException: Failed to DisconnectStoragePoolVDS, error =
Operation not allowed while SPM is active:
('fa155d43-4e68-486f-9f9d-ae3e3916cc4f',), code = 656

Does that mean that *all* hosts must be in maintenance to do that?
There's nothing left on the Gluster storage right now.

Thanks.

Nicolás
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


