I may have made matters worse. I changed the cluster to 4.3 compatibility, then the data center to 4.3 compatibility. All VMs were marked as requiring a reboot. I restarted a couple of them and none of them will start up; they fail with "bad volume specification". The VMs I have not yet restarted are still running fine. I need to figure out why the restarted VMs won't come back up.
Here is an example from vdsm.log
VolumeError: Bad volume specification {'address': {'function': '0x0', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'slot': '0x06'}, 'serial': 'd81a6826-dc46-44db-8de7-405d30e44d57', 'index': 0, 'iface': 'virtio', 'apparentsize': '64293699584', 'specParams': {}, 'cache': 'none', 'imageID': 'd81a6826-dc46-44db-8de7-405d30e44d57', 'truesize': '64293814272', 'type': 'disk', 'domainID': '1f2e9989-9ab3-43d5-971d-568b8feca918', 'reqsize': '0', 'format': 'cow', 'poolID': 'a45e442e-9989-11e8-b0e4-00163e4bf18a', 'device': 'disk', 'path': '/rhev/data-center/a45e442e-9989-11e8-b0e4-00163e4bf18a/1f2e9989-9ab3-43d5-971d-568b8feca918/images/d81a6826-dc46-44db-8de7-405d30e44d57/2d6d5f87-ccb0-48ce-b3ac-84495bd12d32', 'propagateErrors': 'off', 'name': 'vda', 'bootOrder': '1', 'volumeID': '2d6d5f87-ccb0-48ce-b3ac-84495bd12d32', 'diskType': 'file', 'alias': 'ua-d81a6826-dc46-44db-8de7-405d30e44d57', 'discard': False}
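
For what it's worth, here is a quick check I can run on one of the hosts (just a sketch of my own, not anything from vdsm): it takes the volume path from the log entry above and confirms the file is actually reachable under /rhev/data-center, then asks qemu-img what it sees there, since 'format': 'cow' in the drive spec should correspond to a qcow2 image. The path is copied from the log; everything else is my assumption.

#!/usr/bin/env python3
# Host-side sanity check (sketch): is the volume file reachable, and what
# image format does qemu-img report for it?
import os
import subprocess

# Path copied verbatim from the vdsm.log entry above.
PATH = ("/rhev/data-center/a45e442e-9989-11e8-b0e4-00163e4bf18a/"
        "1f2e9989-9ab3-43d5-971d-568b8feca918/images/"
        "d81a6826-dc46-44db-8de7-405d30e44d57/"
        "2d6d5f87-ccb0-48ce-b3ac-84495bd12d32")

if not os.path.exists(PATH):
    # A dangling /rhev/data-center symlink would also show up here.
    print("volume not reachable on this host:", PATH)
else:
    result = subprocess.run(["qemu-img", "info", PATH],
                            capture_output=True, text=True, check=False)
    print(result.stdout or result.stderr)
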