Hello:
I have an oVirt 4.4.10 cluster that is working fine. All storage is on NFS. I would like
to change the mount point for the hosted_storage domain from localhost:/... to
<IP>:/... It is the same physical volume; all I want to do is stop routing my NFS
mounts through the local hosts and instead mount directly from the NFS server.
I have used the "hosted-engine --set-shared-config storage" command to change
the mount point for the storage. Looking at the hosted-engine.conf file confirms that the
new path is set correctly. When I look at the storage inside the hosted-engine, however,
it still shows the old value. How can I get the cluster to use the new path instead of the
old path? I changed it using both the he_local and he_shared keys. Thanks!
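Concretely, the commands I ran were of this form (the NFS path here is just a
placeholder for my real export, and --type selects which copy of the config is
changed):

```shell
# Update the hosted-engine storage path in both configuration copies.
# <IP>:/path/to/hosted_storage is a placeholder, not my actual export path.
hosted-engine --set-shared-config storage <IP>:/path/to/hosted_storage --type=he_local
hosted-engine --set-shared-config storage <IP>:/path/to/hosted_storage --type=he_shared
```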
Tod Pike
I have resolved this issue. I'll document what I did here in case someone else finds
this through Google or something. I basically found this note on this forum (I believe)
which outlined the steps:
The following procedure should provide the solution:
1. set the storage domain to maintenance (via webadmin UI, for example)
2. copy/sync the contents of the storage domain including the metadata, to ensure that
data in both locations (the old and the new mount points) is the same.
3. run a modification query on the ovirt engine database (please replace the values
'yournewmountpoint' and 'therelevantconnectionid' with the correct ones):
UPDATE storage_server_connections
SET connection='yournewmountpoint'
WHERE id='therelevantconnectionid';
4. There is a bug related to storage domain caching in VDSM (on the host), so it needs
to be worked around by restarting VDSM (the service name is 'vdsmd')
5. activate storage domain (via webadmin UI, for example).
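The command-line parts of the procedure above look roughly like this. The rsync
paths are illustrative (your mount directories will differ), 'engine' is the
default engine database name, and 'yournewmountpoint'/'therelevantconnectionid'
are the same placeholders as in the steps:

```shell
# 2. Sync the old storage domain contents (including metadata) to the new
#    export. Paths below are illustrative placeholders.
rsync -avX /rhev/data-center/mnt/localhost:_exports_hosted__storage/ \
           /mnt/new-nfs-staging/

# 3. Update the connection row in the engine database (run on the engine VM;
#    'engine' is the default database name).
sudo -u postgres psql engine -c \
  "UPDATE storage_server_connections
     SET connection='yournewmountpoint'
   WHERE id='therelevantconnectionid';"

# 4. Restart VDSM on each host to clear the cached storage domain information.
systemctl restart vdsmd
```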
So, since this was the storage domain holding hosted_storage, I did some extra
steps for safety:
- stop all VMs
- shut down all hosts except for the one running the hosted-engine
- enter global maintenance
- log into the hosted-engine VM
- systemctl stop ovirt-engine
- use psql as above to edit the connection information for the storage domain I was
interested in
- reboot the host running the hosted-engine
- exit global maintenance
- reboot all the other hosts and bring them back into the cluster
- restart all VMs
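For reference, the maintenance-related steps in the list above map to commands
like these (VM and host shutdowns are done through the usual tools, and the
psql edit is the one shown earlier):

```shell
# On the host still running the hosted engine:
hosted-engine --set-maintenance --mode=global   # enter global maintenance

# On the hosted-engine VM:
systemctl stop ovirt-engine
# ... edit storage_server_connections with psql as described above ...

# Back on the host, after rebooting it:
hosted-engine --set-maintenance --mode=none     # exit global maintenance
```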
It certainly is possible that this global shutdown wasn't totally necessary, but this
cluster isn't in production yet and I thought this was the safest course of action.
Tod Pike