Re: [ovirt-users] oVIRT 4.1.3 / iSCSI / VM Multiple Disks / Snapshot deletion issue.
by Benny Zlotnik
[Adding ovirt-users]
On Sun, Jul 16, 2017 at 12:58 PM, Benny Zlotnik <bzlotnik(a)redhat.com> wrote:
> We can see a lot of related errors in the engine log, but we are unable
> to correlate them with the vdsm log. Do you have more hosts? If yes, please
> attach their logs as well.
> And just to be sure: were you attempting to perform a cold merge?
>
> On Fri, Jul 14, 2017 at 7:32 PM, Devin Acosta <devin(a)pabstatencio.com> wrote:
>>
>> You can get my logs from:
>>
>> https://files.linuxstack.cloud/s/NjoyMF11I38rJpH
>>
>> They were a little too big to attach to this e-mail. I would like to know if
>> this is similar to the bug Richard indicated as a possibility.
>>
>> --
>>
>> Devin Acosta
>> Red Hat Certified Architect, LinuxStack
>> 602-354-1220 || devin(a)linuxguru.co
>>
>> On July 14, 2017 at 9:18:08 AM, Devin Acosta (devin(a)pabstatencio.com) wrote:
>>
>> I have attached the logs.
>>
>>
>>
>> --
>>
>> Devin Acosta
>> Red Hat Certified Architect, LinuxStack
>> 602-354-1220 || devin(a)linuxguru.co
>>
>> On July 13, 2017 at 9:22:03 AM, richard anthony falzini
>> (richardfalzini(a)gmail.com) wrote:
>>
>> Hi,
>> I have the same problem with Gluster.
>> This is the bug I opened:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1461029 .
>> In the bug report I used a single-disk VM, but I have started to notice the
>> problem with multiple-disk VMs.
>>
>>
>> 2017-07-13 0:07 GMT+02:00 Devin Acosta <devin(a)pabstatencio.com>:
>>>
>>> We are running a fresh install of oVIRT 4.1.3, using iSCSI. The VM in
>>> question has multiple disks (4 to be exact). It snapshotted OK on iSCSI;
>>> however, when I went to delete the single snapshot that existed, it went
>>> into a Locked state and never came back. The deletion has been going for well
>>> over an hour, and since the snapshot is less than 12 hours old I am not
>>> convinced it is really doing anything.
>>>
>>> Some Googling indicates there might be some known issues with snapshots on
>>> iSCSI/block storage when a VM has multiple disks.
>>>
>>> The engine log shows:
>>>
>>> 2017-07-12 21:59:42,473Z INFO
>>> [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
>>> (DefaultQuartzScheduler2) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
>>> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
>>> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
>>> type:'PrepareMerge' to complete
>>> 2017-07-12 21:59:52,480Z INFO
>>> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
>>> (DefaultQuartzScheduler2) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
>>> 'RemoveSnapshot' (id: '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on
>>> child command id: '75c535fd-4558-459a-9992-875c48578a97'
>>> type:'ColdMergeSnapshotSingleDisk' to complete
>>> 2017-07-12 21:59:52,483Z INFO
>>> [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
>>> (DefaultQuartzScheduler2) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
>>> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
>>> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
>>> type:'PrepareMerge' to complete
>>> 2017-07-12 22:00:02,490Z INFO
>>> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
>>> (DefaultQuartzScheduler6) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
>>> 'RemoveSnapshot' (id: '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on
>>> child command id: '75c535fd-4558-459a-9992-875c48578a97'
>>> type:'ColdMergeSnapshotSingleDisk' to complete
>>> 2017-07-12 22:00:02,493Z INFO
>>> [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
>>> (DefaultQuartzScheduler6) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
>>> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
>>> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
>>> type:'PrepareMerge' to complete
>>> 2017-07-12 22:00:12,498Z INFO
>>> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
>>> (DefaultQuartzScheduler3) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
>>> 'RemoveSnapshot' (id: '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on
>>> child command id: '75c535fd-4558-459a-9992-875c48578a97'
>>> type:'ColdMergeSnapshotSingleDisk' to complete
>>> 2017-07-12 22:00:12,501Z INFO
>>> [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
>>> (DefaultQuartzScheduler3) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
>>> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
>>> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
>>> type:'PrepareMerge' to complete
>>> 2017-07-12 22:00:22,508Z INFO
>>> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
>>> (DefaultQuartzScheduler5) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
>>> 'RemoveSnapshot' (id: '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on
>>> child command id: '75c535fd-4558-459a-9992-875c48578a97'
>>> type:'ColdMergeSnapshotSingleDisk' to complete
>>> 2017-07-12 22:00:22,511Z INFO
>>> [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
>>> (DefaultQuartzScheduler5) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
>>> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
>>> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
>>> type:'PrepareMerge' to complete
>>>
>>> This is what I saw on the SPM when I grepped the snapshot ID.
>>>
>>> 2017-07-12 14:22:18,773-0700 INFO (jsonrpc/6) [vdsm.api] START
>>> createVolume(sdUUID=u'0c02a758-4295-4199-97de-b041744b3b15',
>>> spUUID=u'00000001-0001-0001-0001-000000000311',
>>> imgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d', size=u'107374182400',
>>> volFormat=4, preallocate=2, diskType=2,
>>> volUUID=u'5921ba71-0f00-46cd-b0be-3c2ac1396845', desc=u'',
>>> srcImgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d',
>>> srcVolUUID=u'0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae', initialSize=None)
>>> from=::ffff:10.4.64.7,60016, flow_id=e94eebf8-75dc-407a-8916-f4ff632f843e
>>> (api:46)
>>> 2017-07-12 14:22:19,095-0700 WARN (tasks/6) [root] File:
>>> /rhev/data-center/00000001-0001-0001-0001-000000000311/0c02a758-4295-4199-97de-b041744b3b15/images/6a887015-67cd-4f7b-b709-eef97142258d/5921ba71-0f00-46cd-b0be-3c2ac1396845
>>> already removed (utils:120)
>>> 2017-07-12 14:22:19,096-0700 INFO (tasks/6) [storage.Volume] Request to
>>> create snapshot
>>> 6a887015-67cd-4f7b-b709-eef97142258d/5921ba71-0f00-46cd-b0be-3c2ac1396845 of
>>> volume
>>> 6a887015-67cd-4f7b-b709-eef97142258d/0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae
>>> (blockVolume:545)
>>> 2017-07-12 14:22:19,676-0700 INFO (tasks/6) [storage.LVM] Change LV tags
>>> (vg=0c02a758-4295-4199-97de-b041744b3b15,
>>> lv=5921ba71-0f00-46cd-b0be-3c2ac1396845, delTags=['OVIRT_VOL_INITIALIZING'],
>>> addTags=['MD_10', u'PU_0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae',
>>> u'IU_6a887015-67cd-4f7b-b709-eef97142258d']) (lvm:1344)
>>> 2017-07-12 14:22:36,010-0700 INFO (jsonrpc/5) [vdsm.api] START
>>> getVolumeInfo(sdUUID=u'0c02a758-4295-4199-97de-b041744b3b15',
>>> spUUID=u'00000001-0001-0001-0001-000000000311',
>>> imgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d',
>>> volUUID=u'5921ba71-0f00-46cd-b0be-3c2ac1396845', options=None)
>>> from=::ffff:10.4.64.7,59664, flow_id=e94eebf8-75dc-407a-8916-f4ff632f843e
>>> (api:46)
>>> 2017-07-12 14:22:36,077-0700 INFO (jsonrpc/5) [storage.VolumeManifest]
>>> Info request: sdUUID=0c02a758-4295-4199-97de-b041744b3b15
>>> imgUUID=6a887015-67cd-4f7b-b709-eef97142258d volUUID =
>>> 5921ba71-0f00-46cd-b0be-3c2ac1396845 (volume:238)
>>> 2017-07-12 14:22:36,185-0700 INFO (jsonrpc/5) [storage.VolumeManifest]
>>> 0c02a758-4295-4199-97de-b041744b3b15/6a887015-67cd-4f7b-b709-eef97142258d/5921ba71-0f00-46cd-b0be-3c2ac1396845
>>> info is {'status': 'OK', 'domain': '0c02a758-4295-4199-97de-b041744b3b15',
>>> 'voltype': 'LEAF', 'description': '', 'parent':
>>> '0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae', 'format': 'COW', 'generation': 0,
>>> 'image': '6a887015-67cd-4f7b-b709-eef97142258d', 'ctime': '1499894539',
>>> 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize':
>>> '1073741824', 'children': [], 'pool': '', 'capacity': '107374182400',
>>> 'uuid': u'5921ba71-0f00-46cd-b0be-3c2ac1396845', 'truesize': '1073741824',
>>> 'type': 'SPARSE', 'lease': {'owners': [], 'version': None}} (volume:272)
>>> 2017-07-12 14:22:36,186-0700 INFO (jsonrpc/5) [vdsm.api] FINISH
>>> getVolumeInfo return={'info': {'status': 'OK', 'domain':
>>> '0c02a758-4295-4199-97de-b041744b3b15', 'voltype': 'LEAF', 'description':
>>> '', 'parent': '0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae', 'format': 'COW',
>>> 'generation': 0, 'image': '6a887015-67cd-4f7b-b709-eef97142258d', 'ctime':
>>> '1499894539', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0',
>>> 'apparentsize': '1073741824', 'children': [], 'pool': '', 'capacity':
>>> '107374182400', 'uuid': u'5921ba71-0f00-46cd-b0be-3c2ac1396845', 'truesize':
>>> '1073741824', 'type': 'SPARSE', 'lease': {'owners': [], 'version': None}}}
>>> from=::ffff:10.4.64.7,59664, flow_id=e94eebf8-75dc-407a-8916-f4ff632f843e
>>> (api:52)
>>> 2017-07-12 14:24:24,854-0700 INFO (jsonrpc/1) [vdsm.api] START
>>> deleteVolume(sdUUID=u'0c02a758-4295-4199-97de-b041744b3b15',
>>> spUUID=u'00000001-0001-0001-0001-000000000311',
>>> imgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d',
>>> volumes=[u'5921ba71-0f00-46cd-b0be-3c2ac1396845'], postZero=u'false',
>>> force=u'false', discard=False) from=::ffff:10.4.64.7,60016,
>>> flow_id=c5e4bda4-9cd3-461d-8164-51d5614b995e (api:46)
>>> 2017-07-12 14:24:25,010-0700 INFO (tasks/7) [storage.Volume] Request to
>>> delete LV 5921ba71-0f00-46cd-b0be-3c2ac1396845 of image
>>> 6a887015-67cd-4f7b-b709-eef97142258d in VG
>>> 0c02a758-4295-4199-97de-b041744b3b15 (blockVolume:579)
>>> 2017-07-12 14:24:25,130-0700 INFO (tasks/7) [storage.VolumeManifest]
>>> sdUUID=0c02a758-4295-4199-97de-b041744b3b15
>>> imgUUID=6a887015-67cd-4f7b-b709-eef97142258d volUUID =
>>> 5921ba71-0f00-46cd-b0be-3c2ac1396845 legality = ILLEGAL (volume:398)
>>> 2017-07-12 14:24:38,881-0700 INFO (jsonrpc/2) [vdsm.api] START
>>> getVolumeInfo(sdUUID=u'0c02a758-4295-4199-97de-b041744b3b15',
>>> spUUID=u'00000001-0001-0001-0001-000000000311',
>>> imgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d',
>>> volUUID=u'5921ba71-0f00-46cd-b0be-3c2ac1396845', options=None)
>>> from=::ffff:10.4.64.7,59664, flow_id=c5e4bda4-9cd3-461d-8164-51d5614b995e
>>> (api:46)
>>> 2017-07-12 14:24:49,911-0700 INFO (jsonrpc/1) [vdsm.api] START
>>> getVolumeInfo(sdUUID=u'0c02a758-4295-4199-97de-b041744b3b15',
>>> spUUID=u'00000001-0001-0001-0001-000000000311',
>>> imgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d',
>>> volUUID=u'0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae', options=None)
>>> from=::ffff:10.4.64.7,59664, flow_id=c5e4bda4-9cd3-461d-8164-51d5614b995e
>>> (api:46)
>>> 2017-07-12 14:24:49,912-0700 INFO (jsonrpc/1) [storage.VolumeManifest]
>>> Info request: sdUUID=0c02a758-4295-4199-97de-b041744b3b15
>>> imgUUID=6a887015-67cd-4f7b-b709-eef97142258d volUUID =
>>> 0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae (volume:238)
>>> 2017-07-12 14:24:50,036-0700 INFO (jsonrpc/1) [storage.VolumeManifest]
>>> 0c02a758-4295-4199-97de-b041744b3b15/6a887015-67cd-4f7b-b709-eef97142258d/0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae
>>> info is {'status': 'OK', 'domain': '0c02a758-4295-4199-97de-b041744b3b15',
>>> 'voltype': 'LEAF', 'description': '', 'parent':
>>> '00000000-0000-0000-0000-000000000000', 'format': 'COW', 'generation': 0,
>>> 'image': '6a887015-67cd-4f7b-b709-eef97142258d', 'ctime': '1499885619',
>>> 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize':
>>> '110729625600', 'children': [], 'pool': '', 'capacity': '107374182400',
>>> 'uuid': u'0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae', 'truesize': '110729625600',
>>> 'type': 'SPARSE', 'lease': {'owners': [], 'version': None}} (volume:272)
>>> 2017-07-12 14:24:50,037-0700 INFO (jsonrpc/1) [vdsm.api] FINISH
>>> getVolumeInfo return={'info': {'status': 'OK', 'domain':
>>> '0c02a758-4295-4199-97de-b041744b3b15', 'voltype': 'LEAF', 'description':
>>> '', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'COW',
>>> 'generation': 0, 'image': '6a887015-67cd-4f7b-b709-eef97142258d', 'ctime':
>>> '1499885619', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0',
>>> 'apparentsize': '110729625600', 'children': [], 'pool': '', 'capacity':
>>> '107374182400', 'uuid': u'0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae', 'truesize':
>>> '110729625600', 'type': 'SPARSE', 'lease': {'owners': [], 'version': None}}}
>>> from=::ffff:10.4.64.7,59664, flow_id=c5e4bda4-9cd3-461d-8164-51d5614b995e
>>> (api:52)
>>>
>>> HELP, Right now I am starting to think Block Storage and oVIRT = BAD!
>>>
>>>
>>>
>>>
>>> --
>>>
>>> Devin Acosta
>>> Red Hat Certified Architect, LinuxStack
>>>
>>>
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users(a)ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>
oVIRT 4.1.3 / iSCSI / VM Multiple Disks / Snapshot deletion issue.
by Devin Acosta
We are running a fresh install of oVIRT 4.1.3, using iSCSI. The VM in
question has multiple disks (4 to be exact). It snapshotted OK on iSCSI;
however, when I went to delete the single snapshot that existed, it went
into a Locked state and never came back. The deletion has been going for
well over an hour, and since the snapshot is less than 12 hours old I am
not convinced it is really doing anything.
Some Googling indicates there might be some known issues with snapshots on
iSCSI/block storage when a VM has multiple disks.
The engine log shows:
2017-07-12 21:59:42,473Z INFO
[org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
(DefaultQuartzScheduler2) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
type:'PrepareMerge' to complete
2017-07-12 21:59:52,480Z INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(DefaultQuartzScheduler2) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
'RemoveSnapshot' (id: '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on
child command id: '75c535fd-4558-459a-9992-875c48578a97'
type:'ColdMergeSnapshotSingleDisk' to complete
2017-07-12 21:59:52,483Z INFO
[org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
(DefaultQuartzScheduler2) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
type:'PrepareMerge' to complete
2017-07-12 22:00:02,490Z INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(DefaultQuartzScheduler6) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
'RemoveSnapshot' (id: '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on
child command id: '75c535fd-4558-459a-9992-875c48578a97'
type:'ColdMergeSnapshotSingleDisk' to complete
2017-07-12 22:00:02,493Z INFO
[org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
(DefaultQuartzScheduler6) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
type:'PrepareMerge' to complete
2017-07-12 22:00:12,498Z INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(DefaultQuartzScheduler3) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
'RemoveSnapshot' (id: '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on
child command id: '75c535fd-4558-459a-9992-875c48578a97'
type:'ColdMergeSnapshotSingleDisk' to complete
2017-07-12 22:00:12,501Z INFO
[org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
(DefaultQuartzScheduler3) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
type:'PrepareMerge' to complete
2017-07-12 22:00:22,508Z INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(DefaultQuartzScheduler5) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
'RemoveSnapshot' (id: '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on
child command id: '75c535fd-4558-459a-9992-875c48578a97'
type:'ColdMergeSnapshotSingleDisk' to complete
2017-07-12 22:00:22,511Z INFO
[org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
(DefaultQuartzScheduler5) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
type:'PrepareMerge' to complete
This is what I saw on the SPM when I grepped the snapshot ID.
2017-07-12 14:22:18,773-0700 INFO (jsonrpc/6) [vdsm.api] START
createVolume(sdUUID=u'0c02a758-4295-4199-97de-b041744b3b15',
spUUID=u'00000001-0001-0001-0001-000000000311',
imgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d', size=u'107374182400',
volFormat=4, preallocate=2, diskType=2,
volUUID=u'5921ba71-0f00-46cd-b0be-3c2ac1396845', desc=u'',
srcImgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d',
srcVolUUID=u'0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae', initialSize=None)
from=::ffff:10.4.64.7,60016, flow_id=e94eebf8-75dc-407a-8916-f4ff632f843e
(api:46)
2017-07-12 14:22:19,095-0700 WARN (tasks/6) [root] File:
/rhev/data-center/00000001-0001-0001-0001-000000000311/0c02a758-4295-4199-97de-b041744b3b15/images/6a887015-67cd-4f7b-b709-eef97142258d/5921ba71-0f00-46cd-b0be-3c2ac1396845
already removed (utils:120)
2017-07-12 14:22:19,096-0700 INFO (tasks/6) [storage.Volume] Request to
create snapshot
6a887015-67cd-4f7b-b709-eef97142258d/5921ba71-0f00-46cd-b0be-3c2ac1396845
of volume
6a887015-67cd-4f7b-b709-eef97142258d/0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae
(blockVolume:545)
2017-07-12 14:22:19,676-0700 INFO (tasks/6) [storage.LVM] Change LV tags
(vg=0c02a758-4295-4199-97de-b041744b3b15,
lv=5921ba71-0f00-46cd-b0be-3c2ac1396845,
delTags=['OVIRT_VOL_INITIALIZING'], addTags=['MD_10',
u'PU_0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae',
u'IU_6a887015-67cd-4f7b-b709-eef97142258d']) (lvm:1344)
2017-07-12 14:22:36,010-0700 INFO (jsonrpc/5) [vdsm.api] START
getVolumeInfo(sdUUID=u'0c02a758-4295-4199-97de-b041744b3b15',
spUUID=u'00000001-0001-0001-0001-000000000311',
imgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d',
volUUID=u'5921ba71-0f00-46cd-b0be-3c2ac1396845', options=None)
from=::ffff:10.4.64.7,59664, flow_id=e94eebf8-75dc-407a-8916-f4ff632f843e
(api:46)
2017-07-12 14:22:36,077-0700 INFO (jsonrpc/5) [storage.VolumeManifest]
Info request: sdUUID=0c02a758-4295-4199-97de-b041744b3b15
imgUUID=6a887015-67cd-4f7b-b709-eef97142258d volUUID =
5921ba71-0f00-46cd-b0be-3c2ac1396845 (volume:238)
2017-07-12 14:22:36,185-0700 INFO (jsonrpc/5) [storage.VolumeManifest]
0c02a758-4295-4199-97de-b041744b3b15/6a887015-67cd-4f7b-b709-eef97142258d/5921ba71-0f00-46cd-b0be-3c2ac1396845
info is {'status': 'OK', 'domain': '0c02a758-4295-4199-97de-b041744b3b15',
'voltype': 'LEAF', 'description': '', 'parent':
'0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae', 'format': 'COW', 'generation': 0,
'image': '6a887015-67cd-4f7b-b709-eef97142258d', 'ctime': '1499894539',
'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize':
'1073741824', 'children': [], 'pool': '', 'capacity': '107374182400',
'uuid': u'5921ba71-0f00-46cd-b0be-3c2ac1396845', 'truesize': '1073741824',
'type': 'SPARSE', 'lease': {'owners': [], 'version': None}} (volume:272)
2017-07-12 14:22:36,186-0700 INFO (jsonrpc/5) [vdsm.api] FINISH
getVolumeInfo return={'info': {'status': 'OK', 'domain':
'0c02a758-4295-4199-97de-b041744b3b15', 'voltype': 'LEAF', 'description':
'', 'parent': '0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae', 'format': 'COW',
'generation': 0, 'image': '6a887015-67cd-4f7b-b709-eef97142258d', 'ctime':
'1499894539', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0',
'apparentsize': '1073741824', 'children': [], 'pool': '', 'capacity':
'107374182400', 'uuid': u'5921ba71-0f00-46cd-b0be-3c2ac1396845',
'truesize': '1073741824', 'type': 'SPARSE', 'lease': {'owners': [],
'version': None}}} from=::ffff:10.4.64.7,59664,
flow_id=e94eebf8-75dc-407a-8916-f4ff632f843e (api:52)
2017-07-12 14:24:24,854-0700 INFO (jsonrpc/1) [vdsm.api] START
deleteVolume(sdUUID=u'0c02a758-4295-4199-97de-b041744b3b15',
spUUID=u'00000001-0001-0001-0001-000000000311',
imgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d',
volumes=[u'5921ba71-0f00-46cd-b0be-3c2ac1396845'], postZero=u'false',
force=u'false', discard=False) from=::ffff:10.4.64.7,60016,
flow_id=c5e4bda4-9cd3-461d-8164-51d5614b995e (api:46)
2017-07-12 14:24:25,010-0700 INFO (tasks/7) [storage.Volume] Request to
delete LV 5921ba71-0f00-46cd-b0be-3c2ac1396845 of image
6a887015-67cd-4f7b-b709-eef97142258d in VG
0c02a758-4295-4199-97de-b041744b3b15 (blockVolume:579)
2017-07-12 14:24:25,130-0700 INFO (tasks/7) [storage.VolumeManifest]
sdUUID=0c02a758-4295-4199-97de-b041744b3b15
imgUUID=6a887015-67cd-4f7b-b709-eef97142258d volUUID =
5921ba71-0f00-46cd-b0be-3c2ac1396845 legality = ILLEGAL (volume:398)
2017-07-12 14:24:38,881-0700 INFO (jsonrpc/2) [vdsm.api] START
getVolumeInfo(sdUUID=u'0c02a758-4295-4199-97de-b041744b3b15',
spUUID=u'00000001-0001-0001-0001-000000000311',
imgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d',
volUUID=u'5921ba71-0f00-46cd-b0be-3c2ac1396845', options=None)
from=::ffff:10.4.64.7,59664, flow_id=c5e4bda4-9cd3-461d-8164-51d5614b995e
(api:46)
2017-07-12 14:24:49,911-0700 INFO (jsonrpc/1) [vdsm.api] START
getVolumeInfo(sdUUID=u'0c02a758-4295-4199-97de-b041744b3b15',
spUUID=u'00000001-0001-0001-0001-000000000311',
imgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d',
volUUID=u'0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae', options=None)
from=::ffff:10.4.64.7,59664, flow_id=c5e4bda4-9cd3-461d-8164-51d5614b995e
(api:46)
2017-07-12 14:24:49,912-0700 INFO (jsonrpc/1) [storage.VolumeManifest]
Info request: sdUUID=0c02a758-4295-4199-97de-b041744b3b15
imgUUID=6a887015-67cd-4f7b-b709-eef97142258d volUUID =
0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae (volume:238)
2017-07-12 14:24:50,036-0700 INFO (jsonrpc/1) [storage.VolumeManifest]
0c02a758-4295-4199-97de-b041744b3b15/6a887015-67cd-4f7b-b709-eef97142258d/0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae
info is {'status': 'OK', 'domain': '0c02a758-4295-4199-97de-b041744b3b15',
'voltype': 'LEAF', 'description': '', 'parent':
'00000000-0000-0000-0000-000000000000', 'format': 'COW', 'generation': 0,
'image': '6a887015-67cd-4f7b-b709-eef97142258d', 'ctime': '1499885619',
'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize':
'110729625600', 'children': [], 'pool': '', 'capacity': '107374182400',
'uuid': u'0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae', 'truesize':
'110729625600', 'type': 'SPARSE', 'lease': {'owners': [], 'version': None}}
(volume:272)
2017-07-12 14:24:50,037-0700 INFO (jsonrpc/1) [vdsm.api] FINISH
getVolumeInfo return={'info': {'status': 'OK', 'domain':
'0c02a758-4295-4199-97de-b041744b3b15', 'voltype': 'LEAF', 'description':
'', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'COW',
'generation': 0, 'image': '6a887015-67cd-4f7b-b709-eef97142258d', 'ctime':
'1499885619', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0',
'apparentsize': '110729625600', 'children': [], 'pool': '', 'capacity':
'107374182400', 'uuid': u'0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae',
'truesize': '110729625600', 'type': 'SPARSE', 'lease': {'owners': [],
'version': None}}} from=::ffff:10.4.64.7,59664,
flow_id=c5e4bda4-9cd3-461d-8164-51d5614b995e (api:52)
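(For reference, the SPM excerpt above was collected roughly like this,
assuming the default VDSM log location; the volume UUID is the one shown in
this thread:)

grep 5921ba71-0f00-46cd-b0be-3c2ac1396845 /var/log/vdsm/vdsm.log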
HELP, Right now I am starting to think Block Storage and oVIRT = BAD!
--
Devin Acosta
Red Hat Certified Architect, LinuxStack
oVirt Metrics
by Arsène Gschwind
Hi all,
I'm trying to set up oVirt metrics as described at
https://www.ovirt.org/develop/release-management/features/engine/metrics-...
using SSO.
My oVirt installation is based on version 4.1.3.5-1.el7.centos.
The doc says to register a new SSO client with a tool called
ovirt-register-sso-client, but that tool seems to be missing. I couldn't
figure out which package contains it; is it included in the latest
distribution?
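In case it helps, this is a generic way to check which package (if any) ships
the tool in the enabled repositories, assuming the tool name given in the doc
is correct:

yum provides '*ovirt-register-sso-client*'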
Thanks for any help.
rgds,
Arsène
--
*Arsène Gschwind*
Fa. Sapify AG im Auftrag der Universität Basel
IT Services
Klingelbergstr. 70 | CH-4056 Basel | Switzerland
Tel. +41 79 449 25 63 | http://its.unibas.ch <http://its.unibas.ch/>
ITS-ServiceDesk: support-its(a)unibas.ch | +41 61 267 14 11
Fwd: Windows USB Redirection
by Abi Askushi
Hi All,
I have oVirt 4.1 with 3 nodes on top of GlusterFS.
I have 2 VMs: Windows 2016 64-bit and Windows 10 64-bit.
When I attach a USB flash disk to a VM (from host devices), the VM cannot
see the USB drive and reports a driver issue in Device Manager (see attached).
This happens with both VMs.
When testing with Windows 7 or Windows XP, the USB device is attached and
accessed normally.
Have you encountered such an issue?
I have installed the latest guest tools on both VMs.
Many thanks
Removing iSCSI domain: host side part
by Gianluca Cecchi
Hello,
I have cleanly removed an iSCSI domain from oVirt. There is another one
(connecting to another storage array) that is the master domain.
But I see that the oVirt hosts still maintain the iSCSI session to the LUN,
so I want to clean up on the OS side before removing the LUN itself from the
storage array.
At the moment I still see the multipath LUN on both hosts:
[root@ov301 network-scripts]# multipath -l
. . .
364817197b5dfd0e5538d959702249b1c dm-2 EQLOGIC ,100E-00
size=4.0T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 9:0:0:0 sde 8:64 active undef running
`- 10:0:0:0 sdf 8:80 active undef running
and
[root@ov301 network-scripts]# iscsiadm -m session
tcp: [1] 10.10.100.9:3260,1
iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910
(non-flash)
tcp: [2] 10.10.100.9:3260,1
iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910
(non-flash)
. . .
Do I have to remove the multipath paths and the multipath device first and
then log out of iSCSI, or is it sufficient to log out of iSCSI so that the
multipath device and its paths are cleanly removed from the OS point of view?
I would like to avoid leaving the multipath device in a stale state.
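For reference, this is the cleanup order I would try, as a sketch only (the
map name, target IQN and portal are the ones shown above; please correct me
if the order is wrong):

multipath -f 364817197b5dfd0e5538d959702249b1c          # flush the now-unused multipath map
iscsiadm -m node -T iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910 \
    -p 10.10.100.9:3260 --logout                        # close the iSCSI sessions
iscsiadm -m node -T iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910 \
    -p 10.10.100.9:3260 -o delete                       # drop the node record so it is not re-logged in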
Thanks
Gianluca
Failed to create template
by aduckers
I’m running 4.1 with a hosted engine, using FC SAN storage. I’ve uploaded a qcow2 image, then created a VM and attached that image.
When trying to create a template from that VM, we get failures with:
failed: low level image copy failed
VDSM command DeleteImageGroupVDS failed: Image does not exist in domain
failed to create template
What should I be looking at to resolve this? Anyone recognize this issue?
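In case it helps, these are the places I plan to look first (standard log
locations; the image path below is just a placeholder):

grep -i 'low level image copy' /var/log/vdsm/vdsm.log      # on the SPM host
grep -i copyimage /var/log/ovirt-engine/engine.log         # on the engine
qemu-img check /path/to/uploaded-image.qcow2               # verify the uploaded source image is intact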
Thanks
Re: [ovirt-users] Detach disk from one VM, attach to another VM
by Victor José Acosta Domínguez
Hello
Yes, it is possible: you must detach the disk from the first VM's
configuration (be careful not to delete the virtual disk itself).
After that you can attach that disk to another VM.
The process should be (a scripted sketch follows below):
- Deactivate/detach the disk from VM1
- Remove the disk from VM1's configuration (without deleting it)
- Attach the disk to VM2
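If you prefer to script it, here is a rough sketch with the v4 REST API (the
engine URL, credentials and IDs are placeholders; please verify against the
API documentation before running it):

# detach the disk from VM1 (removes only the attachment, not the disk itself)
curl -k -u admin@internal:password -X DELETE \
    https://engine.example.com/ovirt-engine/api/vms/VM1_ID/diskattachments/DISK_ID

# attach the same (now floating) disk to VM2
curl -k -u admin@internal:password -X POST -H 'Content-Type: application/xml' \
    -d '<disk_attachment><bootable>false</bootable><interface>virtio</interface><active>true</active><disk id="DISK_ID"/></disk_attachment>' \
    https://engine.example.com/ovirt-engine/api/vms/VM2_ID/diskattachments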
Victor Acosta
2 hosts starting the engine at the same time?
by Gianluca Cecchi
Hello.
I'm on 4.1.3 with a self-hosted engine and GlusterFS as storage.
I updated the kernel on the engine, so I executed these steps (the same
sequence is sketched as commands after the list):
- enable global maintenance from the web admin GUI
- wait some minutes
- shut down the engine VM from inside its OS
- wait some minutes
- execute on one host
[root@ovirt02 ~]# hosted-engine --set-maintenance --mode=none
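The sequence as commands (the shutdown itself was done from inside the guest,
as described above):

hosted-engine --set-maintenance --mode=global   # on one HA host, then wait some minutes
# ... shut down the engine VM from inside its OS, wait some minutes ...
hosted-engine --vm-status                       # check that the engine VM is reported down
hosted-engine --set-maintenance --mode=none     # on one host (ovirt02 here)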
I see that the qemu-kvm process for the engine starts on two hosts, and then
on one of them it gets a "kill -15" and stops.
Is this expected behaviour? It seems somewhat dangerous to me.
- when in maintenance
[root@ovirt02 ~]# hosted-engine --vm-status
!! Cluster is in GLOBAL MAINTENANCE mode !!
--== Host 1 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt01.localdomain.local
Host ID : 1
Engine status : {"health": "good", "vm": "up",
"detail": "up"}
Score : 2597
stopped : False
Local maintenance : False
crc32 : 7931c5c3
local_conf_timestamp : 19811
Host timestamp : 19794
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=19794 (Sun Jul 9 21:31:50 2017)
host-id=1
score=2597
vm_conf_refresh_time=19811 (Sun Jul 9 21:32:06 2017)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
--== Host 2 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : 192.168.150.103
Host ID : 2
Engine status : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 616ceb02
local_conf_timestamp : 2829
Host timestamp : 2812
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=2812 (Sun Jul 9 21:31:52 2017)
host-id=2
score=3400
vm_conf_refresh_time=2829 (Sun Jul 9 21:32:09 2017)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
--== Host 3 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt03.localdomain.local
Host ID : 3
Engine status : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 871204b2
local_conf_timestamp : 24584
Host timestamp : 24567
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=24567 (Sun Jul 9 21:31:52 2017)
host-id=3
score=3400
vm_conf_refresh_time=24584 (Sun Jul 9 21:32:09 2017)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
!! Cluster is in GLOBAL MAINTENANCE mode !!
[root@ovirt02 ~]#
- then I exit global maintenance
[root@ovirt02 ~]# hosted-engine --set-maintenance --mode=none
- While monitoring the status, at some point I see "EngineStart" on both
host2 and host3
[root@ovirt02 ~]# hosted-engine --vm-status
--== Host 1 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt01.localdomain.local
Host ID : 1
Engine status : {"reason": "bad vm status", "health":
"bad", "vm": "down", "detail": "down"}
Score : 3230
stopped : False
Local maintenance : False
crc32 : 25cadbfb
local_conf_timestamp : 20055
Host timestamp : 20040
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=20040 (Sun Jul 9 21:35:55 2017)
host-id=1
score=3230
vm_conf_refresh_time=20055 (Sun Jul 9 21:36:11 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
--== Host 2 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : 192.168.150.103
Host ID : 2
Engine status : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : e6951128
local_conf_timestamp : 3075
Host timestamp : 3058
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=3058 (Sun Jul 9 21:35:59 2017)
host-id=2
score=3400
vm_conf_refresh_time=3075 (Sun Jul 9 21:36:15 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineStart
stopped=False
--== Host 3 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt03.localdomain.local
Host ID : 3
Engine status : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 382efde5
local_conf_timestamp : 24832
Host timestamp : 24816
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=24816 (Sun Jul 9 21:36:01 2017)
host-id=3
score=3400
vm_conf_refresh_time=24832 (Sun Jul 9 21:36:17 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineStart
stopped=False
[root@ovirt02 ~]#
and then
[root@ovirt02 ~]# hosted-engine --vm-status
--== Host 1 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt01.localdomain.local
Host ID : 1
Engine status : {"reason": "bad vm status", "health":
"bad", "vm": "down", "detail": "down"}
Score : 3253
stopped : False
Local maintenance : False
crc32 : 3fc39f31
local_conf_timestamp : 20087
Host timestamp : 20070
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=20070 (Sun Jul 9 21:36:26 2017)
host-id=1
score=3253
vm_conf_refresh_time=20087 (Sun Jul 9 21:36:43 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
--== Host 2 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : 192.168.150.103
Host ID : 2
Engine status : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 4a05c31e
local_conf_timestamp : 3109
Host timestamp : 3079
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=3079 (Sun Jul 9 21:36:19 2017)
host-id=2
score=3400
vm_conf_refresh_time=3109 (Sun Jul 9 21:36:49 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineStarting
stopped=False
--== Host 3 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt03.localdomain.local
Host ID : 3
Engine status : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 382efde5
local_conf_timestamp : 24832
Host timestamp : 24816
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=24816 (Sun Jul 9 21:36:01 2017)
host-id=3
score=3400
vm_conf_refresh_time=24832 (Sun Jul 9 21:36:17 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineStart
stopped=False
[root@ovirt02 ~]#
and
[root@ovirt02 ~]# hosted-engine --vm-status
--== Host 1 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt01.localdomain.local
Host ID : 1
Engine status : {"reason": "bad vm status", "health":
"bad", "vm": "down", "detail": "down"}
Score : 3253
stopped : False
Local maintenance : False
crc32 : 3fc39f31
local_conf_timestamp : 20087
Host timestamp : 20070
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=20070 (Sun Jul 9 21:36:26 2017)
host-id=1
score=3253
vm_conf_refresh_time=20087 (Sun Jul 9 21:36:43 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
--== Host 2 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : 192.168.150.103
Host ID : 2
Engine status : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 4a05c31e
local_conf_timestamp : 3109
Host timestamp : 3079
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=3079 (Sun Jul 9 21:36:19 2017)
host-id=2
score=3400
vm_conf_refresh_time=3109 (Sun Jul 9 21:36:49 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineStarting
stopped=False
--== Host 3 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt03.localdomain.local
Host ID : 3
Engine status : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : fc1e8cf9
local_conf_timestamp : 24868
Host timestamp : 24836
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=24836 (Sun Jul 9 21:36:21 2017)
host-id=3
score=3400
vm_conf_refresh_time=24868 (Sun Jul 9 21:36:53 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineStarting
stopped=False
[root@ovirt02 ~]#
and at the end Host3 goes to "ForceStop" for the engine
[root@ovirt02 ~]# hosted-engine --vm-status
--== Host 1 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt01.localdomain.local
Host ID : 1
Engine status : {"reason": "bad vm status", "health":
"bad", "vm": "down", "detail": "down"}
Score : 3312
stopped : False
Local maintenance : False
crc32 : e9d53432
local_conf_timestamp : 20120
Host timestamp : 20102
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=20102 (Sun Jul 9 21:36:58 2017)
host-id=1
score=3312
vm_conf_refresh_time=20120 (Sun Jul 9 21:37:15 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
--== Host 2 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : 192.168.150.103
Host ID : 2
Engine status : {"reason": "bad vm status", "health":
"bad", "vm": "up", "detail": "powering up"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 7d2330be
local_conf_timestamp : 3141
Host timestamp : 3124
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=3124 (Sun Jul 9 21:37:04 2017)
host-id=2
score=3400
vm_conf_refresh_time=3141 (Sun Jul 9 21:37:21 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineStarting
stopped=False
--== Host 3 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt03.localdomain.local
Host ID : 3
Engine status : {"reason": "Storage of VM is locked.
Is another host already starting the VM?", "health": "bad", "vm":
"already_locked", "detail": "down"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 179825e8
local_conf_timestamp : 24900
Host timestamp : 24883
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=24883 (Sun Jul 9 21:37:08 2017)
host-id=3
score=3400
vm_conf_refresh_time=24900 (Sun Jul 9 21:37:24 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineForceStop
stopped=False
[root@ovirt02 ~]#
Comparing /var/log/libvirt/qemu/HostedEngine of host2 and host3
Host2:
2017-07-09 19:36:36.094+0000: starting up libvirt version: 2.0.0, package:
10.el7_3.9 (CentOS BuildSystem <http://bugs.centos.org>,
2017-05-25-20:52:28, c1bm.rdu2.centos.org), qemu version: 2.6.0
(qemu-kvm-ev-2.6.0-28.el7.10.1), hostname: ovirt02.localdomain.local
... char device redirected to /dev/pts/1 (label charconsole0)
warning: host doesn't support requested feature: CPUID.07H:EBX.erms [bit 9]
Host3:
2017-07-09 19:36:38.143+0000: starting up libvirt version: 2.0.0, package:
10.el7_3.9 (CentOS BuildSystem <http://bugs.centos.org>, 2017-05-25-20:52:28,
c1bm.rdu2.centos.org), qemu version: 2.6.0 (qemu-kvm-ev-2.6.0-28.el7.10.1),
hostname: ovirt03.localdomain.local
... char device redirected to /dev/pts/1 (label charconsole0)
2017-07-09 19:36:38.584+0000: shutting down
2017-07-09T19:36:38.589729Z qemu-kvm: terminating on signal 15 from pid 1835
Any comment?
Is it only a matter of the VM being powered on in paused mode before the OS
itself starts, or do I risk corruption from two qemu-kvm processes trying
to start the engine VM OS?
Thanks,
Gianluca
Installation of oVirt 4.1, Gluster Storage and Hosted Engine
by Simone Marchioni
Hi to all,
I have an old installation of oVirt 3.3 with the Engine on a separate
server. I wanted to test the last oVirt 4.1 with Gluster Storage and
Hosted Engine.
Followed the following tutorial:
http://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-glust...
I have 3 hosts as shown in the tutorial. I installed CentOS 7.3, the oVirt
4.1 repo and all required packages, and configured passwordless SSH as stated.
Then I logged in to the Cockpit web interface, selected "Hosted Engine with
Gluster", hit the Start button and configured the parameters as shown in
the tutorial.
In the last step (5) this is the generated gdeploy configuration (note: I
replaced the real domain with "domain.it"):
#gdeploy configuration generated by cockpit-gluster plugin
[hosts]
ha1.domain.it
ha2.domain.it
ha3.domain.it
[script1]
action=execute
ignore_script_errors=no
file=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb
-h ha1.domain.it,ha2.domain.it,ha3.domain.it
[disktype]
raid6
[diskcount]
12
[stripesize]
256
[service1]
action=enable
service=chronyd
[service2]
action=restart
service=chronyd
[shell2]
action=execute
command=vdsm-tool configure --force
[script3]
action=execute
file=/usr/share/ansible/gdeploy/scripts/disable-multipath.sh
[pv1]
action=create
devices=sdb
ignore_pv_errors=no
[vg1]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no
[lv1:{ha1.domain.it,ha2.domain.it}]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=110GB
poolmetadatasize=1GB
[lv2:ha3.domain.it]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=80GB
poolmetadatasize=1GB
[lv3:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=50GB
[lv4:ha3.domain.it]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB
[lv5:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB
[lv6:ha3.domain.it]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB
[lv7:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_export
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/export
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB
[lv8:ha3.domain.it]
action=create
lvname=gluster_lv_export
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/export
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB
[lv9:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_iso
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/iso
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB
[lv10:ha3.domain.it]
action=create
lvname=gluster_lv_iso
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/iso
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB
[selinux]
yes
[service3]
action=restart
service=glusterd
slice_setup=yes
[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp
services=glusterfs
[script2]
action=execute
file=/usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh
[shell3]
action=execute
command=usermod -a -G gluster qemu
[volume1]
action=create
volname=engine
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/engine/engine,ha2.domain.it:/gluster_bricks/engine/engine,ha3.domain.it:/gluster_bricks/engine/engine
ignore_volume_errors=no
arbiter_count=1
[volume2]
action=create
volname=data
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/data/data,ha2.domain.it:/gluster_bricks/data/data,ha3.domain.it:/gluster_bricks/data/data
ignore_volume_errors=no
arbiter_count=1
[volume3]
action=create
volname=export
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/export/export,ha2.domain.it:/gluster_bricks/export/export,ha3.domain.it:/gluster_bricks/export/export
ignore_volume_errors=no
arbiter_count=1
[volume4]
action=create
volname=iso
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/iso/iso,ha2.domain.it:/gluster_bricks/iso/iso,ha3.domain.it:/gluster_bricks/iso/iso
ignore_volume_errors=no
arbiter_count=1
When I hit the "Deploy" button, the deployment fails with the following error:
PLAY [gluster_servers]
*********************************************************
TASK [Run a shell script]
******************************************************
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The
conditional check 'result.rc != 0' failed. The error was: error while
evaluating conditional (result.rc != 0): 'dict object' has no attribute
'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The
conditional check 'result.rc != 0' failed. The error was: error while
evaluating conditional (result.rc != 0): 'dict object' has no attribute
'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The
conditional check 'result.rc != 0' failed. The error was: error while
evaluating conditional (result.rc != 0): 'dict object' has no attribute
'rc'"}
to retry, use: --limit @/tmp/tmpcV3lam/run-script.retry
PLAY RECAP
*********************************************************************
ha1.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it : ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it : ok=0 changed=0 unreachable=0 failed=1
What am I doing wrong? Maybe I need to initialize GlusterFS in some way...
Which log files record the status of this deployment, so I can
check the errors?
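As a next step (a guess, not something I have verified), I was thinking of
running the same sanity check that gdeploy invokes, directly on one of the
hosts, to see the underlying error instead of the masked ansible message:

/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h ha1.domain.it,ha2.domain.it,ha3.domain.it
echo $?    # check the exit status; gdeploy's [script1] section fails when the script does not succeed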
Thanks in advance!
Simone
Problems with vdsm-tool and ovn-config option
by Gianluca Cecchi
Hello,
in February I installed the OVN controller on some hypervisors (CentOS 7.3
hosts).
At that time the vdsm-tool command was part
of vdsm-python-4.19.4-1.el7.centos.noarch.
I was able to configure my hosts with command
vdsm-tool ovn-config OVN_central_server_IP local_OVN_tunneling_IP
as described here:
https://www.ovirt.org/blog/2016/11/ovirt-provider-ovn/
and now here
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/...
I have 2 hosts that I added later and on which I want to configure OVN
external network provider too.
But now, with vdsm-python at
version vdsm-python-4.19.10.1-1.el7.centos.noarch, I get a usage error when
trying to execute the command:
[root@ov301 ~]# vdsm-tool ovn-config 10.4.192.43 10.4.167.41
Usage: /bin/vdsm-tool [options] <action> [arguments]
Valid options:
-h, --help
Show this help menu.
-l, --logfile <path>
...
Also, the changelog of the package seems quite broken:
[root@ov301 ~]# rpm -q --changelog vdsm-python
* Wed Aug 03 2016 Yaniv Bronhaim <ybronhei(a)redhat.com> - 4.18.999
- Re-review of vdsm.spec to return it to fedora Bug #1361659
* Sun Oct 13 2013 Yaniv Bronhaim <ybronhei(a)redhat.com> - 4.13.0
- Removing vdsm-python-cpopen from the spec
- Adding dependency on formal cpopen package
* Sun Apr 07 2013 Yaniv Bronhaim <ybronhei(a)redhat.com> - 4.9.0-1
- Adding cpopen package
* Wed Oct 12 2011 Federico Simoncelli <fsimonce(a)redhat.com> - 4.9.0-0
- Initial upstream release
* Thu Nov 02 2006 Simon Grinberg <simong(a)qumranet.com> - 0.0-1
- Initial build
[root@ov301 ~]#
How can I configure OVN?
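As a workaround I am considering setting the Open vSwitch external_ids by
hand, which is what I understand the ovn-config verb does under the hood (a
sketch based on standard OVN controller settings, using the same IPs as
above; 6642 is the usual OVN southbound DB port, so please correct me if
oVirt uses a different one):

ovs-vsctl set Open_vSwitch . \
    external_ids:ovn-remote="tcp:10.4.192.43:6642" \
    external_ids:ovn-encap-type=geneve \
    external_ids:ovn-encap-ip=10.4.167.41
systemctl restart ovn-controller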
Thanks in advance,
Gianluca