
Ok, I did a right click on the storage domain and chose Destroy. It got imported again, and the Engine VM too. Now it seems OK, thank you very much.

Best regards,
Misak Khachatryan

On Thu, Aug 31, 2017 at 5:11 PM, Misak Khachatryan <kmisak@gmail.com> wrote:
Hi,
It's grayed out in the web interface; is there any other way? Trying to detach gives this error:
VDSM command DetachStorageDomainVDS failed: Storage domain does not exist: (u'c44343af-cc4a-4bb7-a548-0c6f609d60d5',)
Failed to detach Storage Domain hosted_storage from Data Center Default. (User: admin@internal-authz)
Best regards, Misak Khachatryan
On Thu, Aug 31, 2017 at 4:22 PM, Martin Sivak <msivak@redhat.com> wrote:
Hi,
you can remove the hosted engine storage domain from the engine as well. It should also be re-imported.
We had cases where destroying the domain ended up with a locked SD, but removing the SD and re-importing is the proper way here.
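Before and after removing it you can double-check what the engine actually has on record; a harmless read-only query along these lines should do (just a sketch, with table and column names as they appear in the dumps in your mail quoted below):

-- Read-only: what the engine currently records for the hosted engine
-- storage domain and its reported sizes. Does not modify anything.
SELECT s.id,
       s.storage_name,
       d.available_disk_size,
       d.used_disk_size
FROM   storage_domain_static s
LEFT JOIN storage_domain_dynamic d ON d.id = s.id
WHERE  s.storage_name = 'hosted_storage';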
Best regards
PS: Re-adding the mailing list; we should really set a proper Reply-To header...
Martin Sivak
On Thu, Aug 31, 2017 at 2:07 PM, Misak Khachatryan <kmisak@gmail.com> wrote:
Hi,
I would love to, but:
Error while executing action:
HostedEngine:
Cannot remove VM. The relevant Storage Domain's status is Inactive.
It seems I should somehow fix the storage domain first...
engine=# update storage_domain_static set id = '2e2820f3-8c3d-487d-9a56-1b8cd278ec6c' where id = 'c44343af-cc4a-4bb7-a548-0c6f609d60d5';
ERROR:  update or delete on table "storage_domain_static" violates foreign key constraint "disk_profiles_storage_domain_id_fkey" on table "disk_profiles"
DETAIL:  Key (id)=(c44343af-cc4a-4bb7-a548-0c6f609d60d5) is still referenced from table "disk_profiles".
engine=# update disk_profiles set storage_domain_id = '2e2820f3-8c3d-487d-9a56-1b8cd278ec6c' where id = 'a6d71571-a13a-415b-9f97-635f17cbe67d';
ERROR:  insert or update on table "disk_profiles" violates foreign key constraint "disk_profiles_storage_domain_id_fkey"
DETAIL:  Key (storage_domain_id)=(2e2820f3-8c3d-487d-9a56-1b8cd278ec6c) is not present in table "storage_domain_static".
engine=# select * from storage_domain_static;
 id | storage | storage_name | storage_domain_type | storage_type | storage_domain_format_type | _create_date | _update_date | recoverable | last_time_used_as_master | storage_description | storage_comment | wipe_after_delete | warning_low_space_indicator | critical_space_action_blocker | first_metadata_device | vg_metadata_device | discard_after_delete
 072fbaa1-08f3-4a40-9f34-a5ca22dd1d74 | ceab03af-7220-4d42-8f5c-9b557f5d29af | ovirt-image-repository | 4 | 8 | 0 | 2016-11-02 21:27:22.118586+04 | | t | | | | f | | | | | f
 51c903f6-df83-4510-ac69-c164742ca6e7 | 34b72ce0-6ad7-4180-a8a1-2acfd45824d7 | iso | 2 | 7 | 0 | 2016-11-02 23:26:21.296635+04 | | t | 0 | | | f | 10 | 5 | | | f
 ece1f05c-97c9-4482-a1a5-914397cddd35 | dd38f31f-7bdc-463c-9ae4-fcd4dc8c99fd | export | 3 | 1 | 0 | 2016-12-14 11:28:15.736746+04 | 2016-12-14 11:33:12.872562+04 | t | 0 | Export | | f | 10 | 5 | | | f
 07ea2089-a82b-4ca1-9c8b-54e3895b2ed4 | d1e9e3c8-aaf3-43de-ae80-101e5bd2574f | data | 0 | 7 | 4 | 2016-11-02 23:24:43.402629+04 | 2017-02-22 17:20:42.721092+04 | t | 0 | | | f | 10 | 5 | | | f
 c44343af-cc4a-4bb7-a548-0c6f609d60d5 | 8b54ce35-3187-4fba-a2c7-6b604d077f5b | hosted_storage | 1 | 7 | 4 | 2016-11-02 23:26:13.165435+04 | 2017-02-22 17:20:42.721092+04 | t | 0 | | | f | 10 | 5 | | | f
 004ca4dd-c621-463d-b514-ccfe07ef99d7 | b31a7de9-e789-4ece-9f99-4b150bf581db | virt4-Local | 0 | 4 | 4 | 2017-03-23 09:02:26.37006+04 | 2017-03-23 09:02:31.887534+04 | t | 0 | | | f | 10 | 5 | | | f
(6 rows)
engine=# select * from storage_domain_dynamic;
 id | available_disk_size | used_disk_size | _update_date | external_status
 072fbaa1-08f3-4a40-9f34-a5ca22dd1d74 | | | | 0
 07ea2089-a82b-4ca1-9c8b-54e3895b2ed4 | 1102 | 313 | 2017-08-31 14:20:47.444292+04 | 0
 51c903f6-df83-4510-ac69-c164742ca6e7 | 499 | 0 | 2017-08-31 14:20:47.45047+04 | 0
 ece1f05c-97c9-4482-a1a5-914397cddd35 | 9669 | 6005 | 2017-08-31 14:20:47.454629+04 | 0
 c44343af-cc4a-4bb7-a548-0c6f609d60d5 | | | 2017-08-31 14:18:37.199062+04 | 0
 004ca4dd-c621-463d-b514-ccfe07ef99d7 | 348 | 1 | 2017-08-31 14:20:42.671688+04 | 0
(6 rows)
engine=# select * from disk_profiles;
 id | name | storage_domain_id | qos_id | description | _create_date | _update_date
 04257bff-e95d-4380-b120-adcbe46ae213 | data | 07ea2089-a82b-4ca1-9c8b-54e3895b2ed4 | | | 2016-11-02 23:24:43.528982+04 |
 a6d71571-a13a-415b-9f97-635f17cbe67d | hosted_storage | c44343af-cc4a-4bb7-a548-0c6f609d60d5 | | | 2016-11-02 23:26:13.178791+04 |
 0f9ecdb7-4fca-45e7-9b5c-971b50d4c12e | virt4-Local | 004ca4dd-c621-463d-b514-ccfe07ef99d7 | | | 2017-03-23 09:02:26.409574+04 |
(3 rows)
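If I read the two errors right, it is a chicken-and-egg problem: disk_profiles.storage_domain_id must always point at an existing row in storage_domain_static, so neither row can be changed first. Before touching anything else I wanted to see every table that references storage_domain_static and would hit the same constraint; a read-only catalog query like the sketch below should be safe to run (plain PostgreSQL system catalogs, nothing oVirt-specific assumed):

-- List every foreign key that points at storage_domain_static, i.e. every
-- table that would also have to change if the domain id were ever swapped.
-- Read-only; it only inspects the system catalogs.
SELECT conrelid::regclass        AS referencing_table,
       conname                   AS constraint_name,
       pg_get_constraintdef(oid) AS definition
FROM   pg_constraint
WHERE  contype = 'f'
  AND  confrelid = 'storage_domain_static'::regclass;

That at least shows how deep the dependency chain goes before deciding whether editing the database is worth it.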
Best regards, Misak Khachatryan
On Thu, Aug 31, 2017 at 3:33 PM, Martin Sivak <msivak@redhat.com> wrote:
Hi,
I would not touch the database in this case. I would just delete the old hosted engine VM from the webadmin and wait for it to reimport itself.
But I haven't played with this mechanism for some time.
Best regards
Martin Sivak
On Thu, Aug 31, 2017 at 1:17 PM, Misak Khachatryan <kmisak@gmail.com> wrote:
Hi,
Yesterday someone powered off our storage, and all 3 of my hosts lost their disks. After 2 days of recovering I managed to bring back everything except the engine VM, which is online but not visible to itself.
I did a new deployment of the VM, restored the backup and started engine setup. After the manual database updates all my VMs and hosts are OK now, except the engine: the engine VM that is actually running has a different VM ID than the one in the database.
I've tried this with no luck.
engine=# update vm_static set vm_guid = '75072b32-6f93-4c38-8f18-825004072c1a' where vm_guid = (select vm_guid from vm_static where vm_name = 'HostedEngine');
ERROR:  update or delete on table "vm_static" violates foreign key constraint "fk_disk_vm_element_vm_static" on table "disk_vm_element"
DETAIL:  Key (vm_guid)=(d81ccb53-2594-49db-b69a-04c73b504c59) is still referenced from table "disk_vm_element".
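Just to show the mismatch, a harmless read-only query like this sketch returns the stale guid the database still holds for the hosted engine (while the VM that is actually running, shown below, has 75072b32-6f93-4c38-8f18-825004072c1a):

-- Read-only check of the vm_guid currently stored for the HostedEngine VM.
SELECT vm_guid, vm_name
FROM   vm_static
WHERE  vm_name = 'HostedEngine';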
Right now I've deployed the engine on all 3 hosts, but I see this picture:
[root@virt3 ~]# hosted-engine --vm-status
!! Cluster is in GLOBAL MAINTENANCE mode !!
[root@virt3 ~]# vdsClient -s 0 list
75072b32-6f93-4c38-8f18-825004072c1a
        Status = Up
        statusTime = 4397337690
        kvmEnable = true
        emulatedMachine = pc
        afterMigrationStatus =
        pid = 5280
        devices = [
          {'device': 'console', 'specParams': {}, 'type': 'console', 'deviceId': '2b6b0e87-c86a-4144-ad39-40d5bfe25df1', 'alias': 'console0'},
          {'device': 'memballoon', 'specParams': {'model': 'none'}, 'type': 'balloon', 'target': 16777216, 'alias': 'balloon0'},
          {'specParams': {'source': 'random'}, 'alias': 'rng0', 'address': {'slot': '0x07', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'virtio', 'model': 'virtio', 'type': 'rng'},
          {'device': 'unix', 'alias': 'channel0', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '1'}},
          {'device': 'unix', 'alias': 'channel1', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '2'}},
          {'device': 'unix', 'alias': 'channel2', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '3'}},
          {'device': 'scsi', 'alias': 'scsi0', 'model': 'virtio-scsi', 'type': 'controller', 'address': {'slot': '0x04', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}},
          {'device': 'usb', 'alias': 'usb', 'type': 'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x2'}},
          {'device': 'ide', 'alias': 'ide', 'type': 'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x1'}},
          {'device': 'virtio-serial', 'alias': 'virtio-serial0', 'type': 'controller', 'address': {'slot': '0x05', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}},
          {'device': 'vga', 'alias': 'video0', 'type': 'video', 'address': {'slot': '0x02', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}},
          {'device': 'vnc', 'type': 'graphics', 'port': '5900'},
          {'nicModel': 'pv', 'macAddr': '00:16:3e:01:29:95', 'linkActive': True, 'network': 'ovirtmgmt', 'alias': 'net0', 'specParams': {}, 'deviceId': 'd348a068-063b-4a40-9119-a3d34f6c7db4', 'address': {'slot': '0x03', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface', 'name': 'vnet0'},
          {'index': '2', 'iface': 'ide', 'name': 'hdc', 'alias': 'ide0-1-0', 'specParams': {}, 'readonly': 'True', 'deviceId': 'e738b50b-c200-4429-8489-4519325339c7', 'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'device': 'cdrom', 'shared': 'false', 'path': '', 'type': 'disk'},
          {'poolID': '00000000-0000-0000-0000-000000000000', 'volumeInfo': {'path': 'engine/2e2820f3-8c3d-487d-9a56-1b8cd278ec6c/images/5deeac2d-18d7-4622-9371-ebf965d2bd6b/60aa51b7-32eb-41a9-940d-9489b0375a3d', 'protocol': 'gluster', 'hosts': [{'port': '0', 'transport': 'tcp', 'name': 'virt1'}, {'port': '0', 'transport': 'tcp', 'name': 'virt2'}, {'port': '0', 'transport': 'tcp', 'name': 'virt3'}]}, 'index': '0', 'iface': 'virtio', 'apparentsize': '62277025792', 'specParams': {}, 'imageID': '5deeac2d-18d7-4622-9371-ebf965d2bd6b', 'readonly': 'False', 'shared': 'exclusive', 'truesize': '3255476224', 'type': 'disk', 'domainID': '2e2820f3-8c3d-487d-9a56-1b8cd278ec6c', 'reqsize': '0', 'format': 'raw', 'deviceId': '5deeac2d-18d7-4622-9371-ebf965d2bd6b', 'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'disk', 'path': '/var/run/vdsm/storage/2e2820f3-8c3d-487d-9a56-1b8cd278ec6c/5deeac2d-18d7-4622-9371-ebf965d2bd6b/60aa51b7-32eb-41a9-940d-9489b0375a3d', 'propagateErrors': 'off', 'optional': 'false', 'name': 'vda', 'bootOrder': '1', 'volumeID': '60aa51b7-32eb-41a9-940d-9489b0375a3d', 'alias': 'virtio-disk0', 'volumeChain': [{'domainID': '2e2820f3-8c3d-487d-9a56-1b8cd278ec6c', 'leaseOffset': 0, 'volumeID': '60aa51b7-32eb-41a9-940d-9489b0375a3d', 'leasePath': '/rhev/data-center/mnt/glusterSD/virt1:_engine/2e2820f3-8c3d-487d-9a56-1b8cd278ec6c/images/5deeac2d-18d7-4622-9371-ebf965d2bd6b/60aa51b7-32eb-41a9-940d-9489b0375a3d.lease', 'imageID': '5deeac2d-18d7-4622-9371-ebf965d2bd6b', 'path': '/rhev/data-center/mnt/glusterSD/virt1:_engine/2e2820f3-8c3d-487d-9a56-1b8cd278ec6c/images/5deeac2d-18d7-4622-9371-ebf965d2bd6b/60aa51b7-32eb-41a9-940d-9489b0375a3d'}]}
        ]
        guestDiskMapping = {'5deeac2d-18d7-4622-9': {'name': '/dev/vda'}, 'QEMU_DVD-ROM_QM00003': {'name': '/dev/sr0'}}
        vmType = kvm
        display = vnc
        memSize = 16384
        cpuType = Westmere
        spiceSecureChannels = smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
        smp = 4
        vmName = HostedEngine
        clientIp =
        maxVCpus = 16
[root@virt3 ~]#
[root@virt3 ~]# hosted-engine --vm-status
!! Cluster is in GLOBAL MAINTENANCE mode !!
--== Host 1 status ==--
conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : virt1.management.gnc.am
Host ID                            : 1
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : ef49e5b4
local_conf_timestamp               : 7515
Host timestamp                     : 7512
Extra metadata (valid at timestamp):
        metadata_parse_version=1
        metadata_feature_version=1
        timestamp=7512 (Thu Aug 31 15:14:59 2017)
        host-id=1
        score=3400
        vm_conf_refresh_time=7515 (Thu Aug 31 15:15:01 2017)
        conf_on_shared_storage=True
        maintenance=False
        state=GlobalMaintenance
        stopped=False
--== Host 3 status ==--
conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : virt3
Host ID                            : 3
Engine status                      : {"health": "good", "vm": "up", "detail": "up"}
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : 4a85111c
local_conf_timestamp               : 102896
Host timestamp                     : 102893
Extra metadata (valid at timestamp):
        metadata_parse_version=1
        metadata_feature_version=1
        timestamp=102893 (Thu Aug 31 15:14:46 2017)
        host-id=3
        score=3400
        vm_conf_refresh_time=102896 (Thu Aug 31 15:14:49 2017)
        conf_on_shared_storage=True
        maintenance=False
        state=GlobalMaintenance
        stopped=False
!! Cluster is in GLOBAL MAINTENANCE mode !!
Also, my storage domain for the hosted engine is inactive and I can't activate it; it gives this error in the web console:
VDSM command GetImagesListVDS failed: Storage domain does not exist: (u'c44343af-cc4a-4bb7-a548-0c6f609d60d5',)
It seems I should fiddle with the database a bit more, but that's a scary thing for me.
Any help?
Best regards,
Misak Khachatryan

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users