I know a way, but I'm not sure if you will be happy with that approach.
1. Set up the virsh alias on all hosts:
alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
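A quick sanity check that the alias (and the authfile) works:
virsh list --all     # should list the defined VMs without prompting for credentials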
2. Set the HostedEngine cluster into global maintenance:
hosted-engine --set-maintenance --mode=global
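You can confirm it took effect with:
hosted-engine --vm-status     # the output should mention global maintenance mode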
3. Get the HostedEngine's current config
3.1. If the engine is running - go to the host it runs on and dump the XML:
virsh dumpxml HostedEngine > HostedEngine-2019-12-08.xml
3.2. If the engine is dead - take the XML from vdsm.log (it gets logged when the VM was last started), as sketched below
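A rough sketch for digging it out of the log (the exact log layout may vary between VDSM versions):
grep -n '<name>HostedEngine</name>' /var/log/vdsm/vdsm.log
# then copy the surrounding <domain> ... </domain> block into HostedEngine-2019-12-08.xml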
4. Edit the configuration
4.1. If the engine is running -> edit it directly with: virsh edit HostedEngine
4.2. If the engine is not running -> edit the XML with the new values, then define the ovirtmgmt network and the VM (a memory-element sketch follows the define commands):
vdsm-ovirtmgmt.xml:
<network>
  <name>vdsm-ovirtmgmt</name>
  <uuid>8ded486e-e681-4754-af4b-5737c2b05405</uuid>
  <forward mode='bridge'/>
  <bridge name='ovirtmgmt'/>
</network>
virsh net-define vdsm-ovirtmgmt.xml
virsh define HostedEngine.xml     (the XML you edited in 4.2)
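Whichever of 4.1/4.2 applies, the memory change itself boils down to the standard libvirt memory elements - the values below are placeholders, not recommendations:
<maxMemory slots='16' unit='KiB'>67108864</maxMemory>
<memory unit='KiB'>16777216</memory>
<currentMemory unit='KiB'>16777216</currentMemory>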
4.3. You may try to power up the VM, but it will complain about some missing symbolic links pointing from the local storage paths to the Gluster storage. Just create those symbolic links and power up the VM again (a sketch follows the command below)
virsh start HostedEngine
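If the start fails on the missing links, a sketch of the fix - the UUIDs and paths below are hypothetical, copy the real ones from the virsh error message:
mkdir -p /var/run/vdsm/storage/<sd_uuid>/<img_uuid>
ln -s /rhev/data-center/mnt/glusterSD/<server>:_engine/<sd_uuid>/images/<img_uuid>/<vol_uuid> \
      /var/run/vdsm/storage/<sd_uuid>/<img_uuid>/<vol_uuid>
virsh start HostedEngine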
5. Keep the engine running for at least 6 hours, so the new configuration gets written back to the OVF_STORE - even when the OVF update interval is reduced, the engine still seems to ignore it.
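If you still want to check (or lower) the interval, engine-config on the engine VM knows the option:
engine-config -g OvfUpdateIntervalInMinutes       # show the current value
engine-config -s OvfUpdateIntervalInMinutes=15    # example value; restart ovirt-engine afterwards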
6. Power off the engine gracefully
ssh engine "poweroff"
7. Power on the engine via
hosted-engine --vm-start
8. Once the engine is up, ssh in and verify the memory
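For example (replace 'engine' with whatever resolves to your HostedEngine VM):
ssh engine "free -h"     # the 'total' column should reflect the new size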
9. Remove maintenance
hosted-engine --set-maintenance --mode=none
I haven't tested this approach for extending memory, but it should work.
Best Regards,
Strahil Nikolov