I have resolved this issue. I'll document what I did here in case someone else finds
this through Google. I found a note on this forum (I believe)
which outlined the steps:
The following procedure should provide the solution:
1. set the storage domain to maintenance (via webadmin UI, for example)
2. copy/sync the contents of the storage domain including the metadata, to ensure that
data in both locations (the old and the new mount points) is the same.
3. run a modification query on the ovirt engine database (please replace the values
'yournewmountpoint' and 'therelevantconnectionid' with the correct ones):
UPDATE storage_server_connections
SET connection='yournewmountpoint'
WHERE id='therelevantconnectionid';
4. There is a bug related to storage domain caching in VDSM (on the hosts), so it needs
to be worked around by restarting VDSM (the service name is 'vdsmd')
5. activate storage domain (via webadmin UI, for example).
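The database step above can be sketched as follows. This is a dry-run sketch: the mount
point and connection id are hypothetical placeholders (find the real id in the
storage_server_connections table first), and the actual psql/systemctl invocations are
left commented out.

```shell
# Hypothetical placeholder values -- substitute your own.
NEW_MOUNT='nfs.example.com:/export/data'
CONN_ID='00000000-0000-0000-0000-000000000000'

# Build the UPDATE statement from the procedure above.
SQL="UPDATE storage_server_connections SET connection='${NEW_MOUNT}' WHERE id='${CONN_ID}';"
echo "$SQL"

# On the engine host, the query can then be run against the engine database, e.g.:
#   sudo -u postgres psql engine -c "$SQL"
# Afterwards, restart VDSM on each host to clear the cached domain info:
#   systemctl restart vdsmd
```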
Since this was the storage domain holding hosted_storage, I took some extra steps for
safety:
- stop all VMs
- shutdown all hosts except for the one running hosted-engine
- enter global maintenance
- log into the hosted-engine VM
- systemctl stop ovirt-engine
- use psql as above to edit the connection information for the storage domain I was
interested in
- reboot the host running the hosted-engine
- exit global maintenance
- reboot all the other hosts and bring them back into the cluster
- restart all VMs
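The hosted-engine-specific parts of the sequence above can be sketched as below. The
'run' wrapper just prints each command (a dry run); drop it to execute for real. The
'engine-vm' hostname is a hypothetical stand-in for your hosted-engine VM.

```shell
# Dry-run helper: print the command instead of executing it.
run() { echo "+ $*"; }

run hosted-engine --set-maintenance --mode=global    # enter global maintenance
run ssh root@engine-vm systemctl stop ovirt-engine   # stop the engine inside the VM
run sudo -u postgres psql engine                     # edit storage_server_connections here
run reboot                                           # reboot the host running hosted-engine
run hosted-engine --set-maintenance --mode=none      # exit global maintenance
```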
It's certainly possible that this global shutdown wasn't strictly necessary, but this
cluster isn't in production yet, and I thought this was the safest course of action.
Tod Pike