oVirt - vdo: ERROR - Device /dev/sd excluded by a filter
by Jeremey Wise
vdo: ERROR - Device /dev/sdc excluded by a filter
On another server:
vdo: ERROR - Device
/dev/mapper/nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001p1
excluded by a filter.
On all systems, when I go to create a VDO volume on blank drives, I get this
filter error. All disks outside of the HCI wizard setup are now blocked
from creating a new Gluster volume group.
Here is what I see in /etc/lvm/lvm.conf:
[root@odin ~]# cat /etc/lvm/lvm.conf |grep filter
filter =
["a|^/dev/disk/by-id/lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC$|",
"a|^/dev/disk/by-id/lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1$|",
"r|.*|"]
[root@odin ~]# ls -al /dev/disk/by-id/
total 0
drwxr-xr-x. 2 root root 1220 Sep 18 14:32 .
drwxr-xr-x. 6 root root  120 Sep 18 14:32 ..
lrwxrwxrwx. 1 root root    9 Sep 18 22:40 ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN -> ../../sda
lrwxrwxrwx. 1 root root   10 Sep 18 22:40 ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part1 -> ../../sda1
lrwxrwxrwx. 1 root root   10 Sep 18 22:40 ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part2 -> ../../sda2
lrwxrwxrwx. 1 root root    9 Sep 18 14:32 ata-Micron_1100_MTFDDAV512TBN_17401F699137 -> ../../sdb
lrwxrwxrwx. 1 root root    9 Sep 18 22:40 ata-WDC_WDS100T2B0B-00YS70_183533804564 -> ../../sdc
lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-name-cl-home -> ../../dm-2
lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-name-cl-root -> ../../dm-0
lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-name-cl-swap -> ../../dm-1
lrwxrwxrwx. 1 root root   11 Sep 18 16:40 dm-name-gluster_vg_sdb-gluster_lv_data -> ../../dm-11
lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-name-gluster_vg_sdb-gluster_lv_engine -> ../../dm-6
lrwxrwxrwx. 1 root root   11 Sep 18 16:40 dm-name-gluster_vg_sdb-gluster_lv_vmstore -> ../../dm-12
lrwxrwxrwx. 1 root root   10 Sep 18 23:35 dm-name-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-3
lrwxrwxrwx. 1 root root   10 Sep 18 23:49 dm-name-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001p1 -> ../../dm-4
lrwxrwxrwx. 1 root root   10 Sep 18 14:32 dm-name-vdo_sdb -> ../../dm-5
lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADc49gc6PWLRBCoJ2B3JC9tDJejyx5eDPT -> ../../dm-1
lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADOMNJfgcat9ZLOpcNO7FyG8ixcl5s93TU -> ../../dm-2
lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADzqPGk0yTQ19FIqgoAfsCxWg7cDMtl71r -> ../../dm-0
lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOq6Om5comvRFWJDbtVZAKtE5YGl4jciP9 -> ../../dm-6
lrwxrwxrwx. 1 root root   11 Sep 18 16:40 dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOqVheASEgerWSEIkjM1BR3us3D9ekHt0L -> ../../dm-11
lrwxrwxrwx. 1 root root   11 Sep 18 16:40 dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOQz6vXuivIfup6cquKAjPof8wIGOSe4Vz -> ../../dm-12
lrwxrwxrwx. 1 root root   10 Sep 18 23:35 dm-uuid-mpath-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-3
lrwxrwxrwx. 1 root root   10 Sep 18 23:49 dm-uuid-part1-mpath-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-4
lrwxrwxrwx. 1 root root   10 Sep 18 14:32 dm-uuid-VDO-472035cc-8d2b-40ac-afe9-fa60b62a887f -> ../../dm-5
lrwxrwxrwx. 1 root root   10 Sep 18 14:32 lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC -> ../../dm-5
lrwxrwxrwx. 1 root root   10 Sep 18 22:40 lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1 -> ../../sda2
lrwxrwxrwx. 1 root root   13 Sep 18 14:32 nvme-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../nvme0n1
lrwxrwxrwx. 1 root root   15 Sep 18 14:32 nvme-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001-part1 -> ../../nvme0n1p1
lrwxrwxrwx. 1 root root   13 Sep 18 14:32 nvme-SPCC_M.2_PCIe_SSD_AA000000000000002458 -> ../../nvme0n1
lrwxrwxrwx. 1 root root   15 Sep 18 14:32 nvme-SPCC_M.2_PCIe_SSD_AA000000000000002458-part1 -> ../../nvme0n1p1
lrwxrwxrwx. 1 root root    9 Sep 18 22:40 scsi-0ATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN -> ../../sda
lrwxrwxrwx. 1 root root   10 Sep 18 22:40 scsi-0ATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part1 -> ../../sda1
lrwxrwxrwx. 1 root root   10 Sep 18 22:40 scsi-0ATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part2 -> ../../sda2
lrwxrwxrwx. 1 root root    9 Sep 18 14:32 scsi-0ATA_Micron_1100_MTFD_17401F699137 -> ../../sdb
lrwxrwxrwx. 1 root root    9 Sep 18 22:40 scsi-0ATA_WDC_WDS100T2B0B-_183533804564 -> ../../sdc
lrwxrwxrwx. 1 root root    9 Sep 18 22:40 scsi-1ATA_INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN -> ../../sda
lrwxrwxrwx. 1 root root   10 Sep 18 22:40 scsi-1ATA_INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part1 -> ../../sda1
lrwxrwxrwx. 1 root root   10 Sep 18 22:40 scsi-1ATA_INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part2 -> ../../sda2
lrwxrwxrwx. 1 root root    9 Sep 18 14:32 scsi-1ATA_Micron_1100_MTFDDAV512TBN_17401F699137 -> ../../sdb
lrwxrwxrwx. 1 root root    9 Sep 18 22:40 scsi-1ATA_WDC_WDS100T2B0B-00YS70_183533804564 -> ../../sdc
lrwxrwxrwx. 1 root root    9 Sep 18 22:40 scsi-35001b448b9608d90 -> ../../sdc
lrwxrwxrwx. 1 root root    9 Sep 18 14:32 scsi-3500a07511f699137 -> ../../sdb
lrwxrwxrwx. 1 root root    9 Sep 18 22:40 scsi-355cd2e404b581cc0 -> ../../sda
lrwxrwxrwx. 1 root root   10 Sep 18 22:40 scsi-355cd2e404b581cc0-part1 -> ../../sda1
lrwxrwxrwx. 1 root root   10 Sep 18 22:40 scsi-355cd2e404b581cc0-part2 -> ../../sda2
lrwxrwxrwx. 1 root root   10 Sep 18 23:35 scsi-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-3
lrwxrwxrwx. 1 root root   10 Sep 18 23:49 scsi-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001-part1 -> ../../dm-4
lrwxrwxrwx. 1 root root    9 Sep 18 22:40 scsi-SATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN -> ../../sda
lrwxrwxrwx. 1 root root   10 Sep 18 22:40 scsi-SATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part1 -> ../../sda1
lrwxrwxrwx. 1 root root   10 Sep 18 22:40 scsi-SATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part2 -> ../../sda2
lrwxrwxrwx. 1 root root    9 Sep 18 14:32 scsi-SATA_Micron_1100_MTFD_17401F699137 -> ../../sdb
lrwxrwxrwx. 1 root root    9 Sep 18 22:40 scsi-SATA_WDC_WDS100T2B0B-_183533804564 -> ../../sdc
lrwxrwxrwx. 1 root root    9 Sep 18 22:40 wwn-0x5001b448b9608d90 -> ../../sdc
lrwxrwxrwx. 1 root root    9 Sep 18 14:32 wwn-0x500a07511f699137 -> ../../sdb
lrwxrwxrwx. 1 root root    9 Sep 18 22:40 wwn-0x55cd2e404b581cc0 -> ../../sda
lrwxrwxrwx. 1 root root   10 Sep 18 22:40 wwn-0x55cd2e404b581cc0-part1 -> ../../sda1
lrwxrwxrwx. 1 root root   10 Sep 18 22:40 wwn-0x55cd2e404b581cc0-part2 -> ../../sda2
lrwxrwxrwx. 1 root root   10 Sep 18 23:35 wwn-0xvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 -> ../../dm-3
lrwxrwxrwx. 1 root root   10 Sep 18 23:49 wwn-0xvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001-part1 -> ../../dm-4
lrwxrwxrwx. 1 root root   15 Sep 18 14:32 wwn-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001-part1 -> ../../nvme0n1p1
So the filter lists three entries: two accepted PVs plus a reject-all rule:
lvm-pv-uuid-e1fvwo.... -> dm-5 -> vdo_sdb (used by HCI for all three Gluster base volumes)
lvm-pv-uuid-mr9awW... -> sda2 -> boot volume
[root@odin ~]# lsblk
NAME                                                        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                                           8:0    0  74.5G  0 disk
├─sda1                                                        8:1    0     1G  0 part  /boot
└─sda2                                                        8:2    0  73.5G  0 part
  ├─cl-root                                                 253:0    0  44.4G  0 lvm   /
  ├─cl-swap                                                 253:1    0   7.5G  0 lvm   [SWAP]
  └─cl-home                                                 253:2    0  21.7G  0 lvm   /home
sdb                                                           8:16   0   477G  0 disk
└─vdo_sdb                                                   253:5    0   2.1T  0 vdo
  ├─gluster_vg_sdb-gluster_lv_engine                        253:6    0   100G  0 lvm   /gluster_bricks/engine
  ├─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb_tmeta    253:7    0     1G  0 lvm
  │ └─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb-tpool  253:9    0     2T  0 lvm
  │   ├─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb      253:10   0     2T  1 lvm
  │   ├─gluster_vg_sdb-gluster_lv_data                      253:11   0  1000G  0 lvm   /gluster_bricks/data
  │   └─gluster_vg_sdb-gluster_lv_vmstore                   253:12   0  1000G  0 lvm   /gluster_bricks/vmstore
  └─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb_tdata    253:8    0     2T  0 lvm
    └─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb-tpool  253:9    0     2T  0 lvm
      ├─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb      253:10   0     2T  1 lvm
      ├─gluster_vg_sdb-gluster_lv_data                      253:11   0  1000G  0 lvm   /gluster_bricks/data
      └─gluster_vg_sdb-gluster_lv_vmstore                   253:12   0  1000G  0 lvm   /gluster_bricks/vmstore
sdc                                                           8:32   0 931.5G  0 disk
nvme0n1                                                     259:0    0 953.9G  0 disk
├─nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 253:3 0 953.9G 0 mpath
│ └─nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001p1 253:4 0 953.9G 0 part
└─nvme0n1p1
So I don't think this is LVM filtering things.
Multipath is showing weird treatment of the NVMe drive, but that is outside
this conversation.
[root@odin ~]# multipath -l
nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001 dm-3 NVME,SPCC M.2 PCIe SSD
size=954G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
`- 0:1:1:1 nvme0n1 259:0 active undef running
[root@odin ~]#
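As an aside on the multipath oddity: if the NVMe disk is not meant to be multipathed at all, one option (an assumption, not something confirmed in this thread) is a local blacklist drop-in, which multipathd reads from /etc/multipath/conf.d/ and which VDSM does not overwrite:

cat > /etc/multipath/conf.d/99-local.conf <<'EOF'
blacklist {
    wwid "nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-00000001"
}
EOF
systemctl reload multipathd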
Where is it getting this filter?
I have done gdisk on /dev/sdc (the new 1 TB drive) and it shows no partitions. I even
did a full dd if=/dev/zero and nothing changed.
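Two more quick ways to confirm the disk really is blank, using standard util-linux tools:

wipefs /dev/sdc     # with no options this only lists leftover signatures
blkid -p /dev/sdc   # low-level probe; no output means no signature was found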
I reloaded the OS on the system to get through the wizard setup. Now that all three
nodes are in the HCI cluster, all six drives (2 x 1 TB in each server) are
locked from any use due to this filter error.
Ideas?
--
jeremey.wise(a)gmail.com
Upgrade Ovirt from 4.2 to 4.4 on CentOS7.4
by tiziano.pacioni@par-tec.it
Hi everyone,
I am writing for support regarding an oVirt upgrade.
I am using oVirt 4.2 on the CentOS 7.4 operating system.
The latest release of the oVirt engine is 4.4, which is available for CentOS 8. Can I upgrade without upgrading the operating system to CentOS 8?
If I am not wrong, it is not possible to upgrade in place from CentOS 7 to CentOS 8. Can anyone give me some advice? Thank you all!
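For reference, the documented 4.4 path is not an in-place OS upgrade: the engine is backed up on the old machine and restored on a fresh CentOS 8 one (and 4.2 must first be upgraded to 4.3). A rough sketch of the standalone-engine flow, with file names as placeholders; check the upgrade guide for the exact flags:

# On the old CentOS 7 engine, after upgrading 4.2 -> 4.3:
engine-backup --mode=backup --scope=all --file=engine43.bck --log=backup.log
# On a fresh CentOS 8 machine with the oVirt 4.4 repositories installed:
yum install ovirt-engine
engine-backup --mode=restore --file=engine43.bck --log=restore.log \
  --provision-all-databases --restore-permissions
engine-setup
# A self-hosted engine is restored instead via:
# hosted-engine --deploy --restore-from-file=engine43.bck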
VM stuck in "reboot in progress" ("virtual machine XXX should be running in a host but it isn't.").
by Gilboa Davara
Hello all (and happy new year),
(Note: Also reported as https://bugzilla.redhat.com/show_bug.cgi?id=1880251)
Self hosted engine, single node, NFS.
Attempted to install CentOS over an existing Fedora VM with one host
device (USB printer).
Reboot failed, trying to boot from a non-existent CDROM.
Tried shutting the VM down, failed.
Tried powering off the VM, failed.
Dropped the cluster to global maintenance, rebooted host + engine (was
planning to upgrade it anyhow...), VM still stuck.
When trying to power off the VM, the following message can be found
in the engine.log:
2020-09-18 07:58:51,439+03 INFO [org.ovirt.engine.core.bll.StopVmCommand] (EE-ManagedThreadFactory-engine-Thread-42) [7bc4ac71-f0b2-4af7-b081-100dc99b6123] Running command: StopVmCommand internal: false. Entities affected : ID: b411e573-bcda-4689-b61f-1811c6f03ad5 Type: VMAction group STOP_VM with role type USER
2020-09-18 07:58:51,441+03 WARN [org.ovirt.engine.core.bll.StopVmCommand] (EE-ManagedThreadFactory-engine-Thread-42) [7bc4ac71-f0b2-4af7-b081-100dc99b6123] Strange, according to the status 'RebootInProgress' virtual machine 'b411e573-bcda-4689-b61f-1811c6f03ad5' should be running in a host but it isn't.
2020-09-18 07:58:51,594+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-42) [7bc4ac71-f0b2-4af7-b081-100dc99b6123] EVENT_ID: USER_FAILED_STOP_VM(56), Failed to power off VM kids-home-srv (Host: <UNKNOWN>, User: gilboa@internal-authz).
My question is simple: Pending a solution to the bug, can I somehow
drop the state of the VM? It's currently holding a sizable disk image
and a USB device I need (printer).
As it's my private VM cluster, I have no problem dropping the site
completely for maintenance.
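Two read-only checks that may help pin down the stale state, assuming a standard engine database and the stock dbutils script (usage from memory; see its -h output):

# What the engine database currently records for this VM:
sudo -u postgres psql engine -c \
  "SELECT vm_guid, status FROM vm_dynamic WHERE vm_guid = 'b411e573-bcda-4689-b61f-1811c6f03ad5';"
# List entities the engine considers locked (stop ovirt-engine before changing anything):
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t vm -q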
Thanks,
Gilboa
Cannot import VM disks from previously detached storage domain
by Strahil Nikolov
Hello All,
I would like to ask how to proceed further.
Here is what I have done so far on my ovirt 4.3.10:
1. Set in maintenance and detached my Gluster-based storage domain
2. Did some maintenance on the gluster
3. Reattached and activated my Gluster-based storage domain
4. Imported my ISOs via the Disk Import tab in the UI
Next I tried to import the VM disks, but they are unavailable in the disk tab.
So I tried to import the VM:
1. First try - import with partial -> failed due to a MAC conflict
2. Second try - import with partial, allowing MAC reassignment -> failed as the VM id already exists -> recommends removing the original VM
3. I tried to detach the VM's disks so I could delete it - but this is not possible, as the VM already has a snapshot.
What is the proper way to import my non-OS disks (the data domain is slower but has more space, which makes it more suitable for "data")?
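A sketch of registering the disks over the REST API, which sometimes works when the Disk Import tab comes up empty; this is from memory of the 4.3 v4 API, so verify the exact form against the API docs, and ENGINE_FQDN, SD_ID and DISK_ID are placeholders:

# List unregistered disks on the reattached storage domain:
curl -k -u admin@internal:PASSWORD \
  "https://ENGINE_FQDN/ovirt-engine/api/storagedomains/SD_ID/disks;unregistered"
# Register one of them by id:
curl -k -u admin@internal:PASSWORD -X POST -H "Content-Type: application/xml" \
  -d '<disk id="DISK_ID"/>' \
  "https://ENGINE_FQDN/ovirt-engine/api/storagedomains/SD_ID/disks;unregistered"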
Best Regards,
Strahil Nikolov
Fail install SHE ovirt-engine from backupfile (4.3 -> 4.4)
by francesco@shellrent.com
Hi Everyone,
In a test environment I'm trying to deploy a single-node self-hosted engine 4.4 on CentOS 8 from a 4.3 backup. The current setup is:
- node1 with CentOS 7, oVirt 4.3 and a working self-hosted engine. The data domain is a local NFS;
- node2 with CentOS 8, where we are trying to deploy the engine starting from the node1 engine backup;
- host1, with CentOS 7.8, running a couple of VMs (4.3).
I'm following the guide: https://www.ovirt.org/documentation/upgrade_guide/#Upgrading_the_Manager_...
Everything seems to be working fine: the engine on node1 is in global maintenance mode and the ovirt-engine service is stopped. The deploy on node2 gets stuck with the following error:
TASK [ovirt.hosted_engine_setup : Wait for OVF_STORE disk content]
[ ERROR ] {'msg': 'non-zero return code', 'cmd': "vdsm-client Image prepare storagepoolID=06c58622-f99b-11ea-9122-00163e1bbc93 storagedomainID=2a4a3cce-f2f6-4ddd-b337-df5ef562f520 imageID=e48a66dd-74c9-43eb-890e-778e9c4ee8db volumeID=06bb5f34-112d-4214-91d2-53d0bdb84321 | grep path | awk '{ print $2 }' | xargs -I{} sudo -u vdsm dd if={} | tar -tvf - 6023764f-5547-4b23-92ca-422eafdf3f87.ovf", 'stdout': '', 'stderr': "vdsm-client: Command Image.prepare with args {'storagepoolID': '06c58622-f99b-11ea-9122-00163e1bbc93', 'storagedomainID': '2a4a3cce-f2f6-4ddd-b337-df5ef562f520', 'imageID': 'e48a66dd-74c9-43eb-890e-778e9c4ee8db',
'volumeID': '06bb5f34-112d-4214-91d2-53d0bdb84321'} failed:\n(code=309, message=Unknown pool id, pool not connected: ('06c58622-f99b-11ea-9122-00163e1bbc93',))\ntar: This does not look like a tar archive\ntar: 6023764f-5547-4b23-92ca-422eafdf3f87.ovf: Not found in archive\ntar: Exiting with failure status due to previous errors", 'rc': 2, 'start': '2020-09-21 17:14:17.293090', 'end': '2020-09-21 17:14:17.644253', 'delta': '0:00:00.351163', 'changed': True, 'failed': True, 'invocation': {'module_args': {'warn': False, '_raw_params': "vdsm-client Image prepare storagepoolID=06c58622-f99b-11ea-9122-00163e1bbc93 storagedomainID=2a4a3cce-f2f6-4ddd-b337-df5ef562f520 imageID=e48a66dd-74c9-43eb-890e-778e9c4ee8db volumeID=06bb5f34-112d-4214-91d2-53d0bdb84321 | grep path | awk '{ print $2 }' | xargs -I{} sudo -u vdsm dd if={} | tar -tvf - 6023764f-5547-4b23-92ca-422eafdf3f87.ovf", '_uses_shell': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable
': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["vdsm-client: Command Image.prepare with args {'storagepoolID': '06c58622-f99b-11ea-9122-00163e1bbc93', 'storagedomainID': '2a4a3cce-f2f6-4ddd-b337-df5ef562f520', 'imageID': 'e48a66dd-74c9-43eb-890e-778e9c4ee8db', 'volumeID': '06bb5f34-112d-4214-91d2-53d0bdb84321'} failed:", "(code=309, message=Unknown pool id, pool not connected: ('06c58622-f99b-11ea-9122-00163e1bbc93',))", 'tar: This does not look like a tar archive', 'tar: 6023764f-5547-4b23-92ca-422eafdf3f87.ovf: Not found in archive', 'tar: Exiting with failure status due to previous errors'], '_ansible_no_log': False, 'attempts':
12, 'item': {'name': 'OVF_STORE', 'image_id': '06bb5f34-112d-4214-91d2-53d0bdb84321', 'id': 'e48a66dd-74c9-43eb-890e-778e9c4ee8db'}, 'ansible_loop_var': 'item', '_ansible_item_label': {'name': 'OVF_STORE', 'image_id': '06bb5f34-112d-4214-91d2-53d0bdb84321', 'id': 'e48a66dd-74c9-43eb-890e-778e9c4ee8db'}}
[ ERROR ] {'msg': 'non-zero return code', 'cmd': "vdsm-client Image prepare storagepoolID=06c58622-f99b-11ea-9122-00163e1bbc93 storagedomainID=2a4a3cce-f2f6-4ddd-b337-df5ef562f520 imageID=750428bd-1273-467f-9b27-7f6fe58a446c volumeID=1c89c678-f883-4e61-945c-5f7321add343 | grep path | awk '{ print $2 }' | xargs -I{} sudo -u vdsm dd if={} | tar -tvf - 6023764f-5547-4b23-92ca-422eafdf3f87.ovf", 'stdout': '', 'stderr': "vdsm-client: Command Image.prepare with args {'storagepoolID': '06c58622-f99b-11ea-9122-00163e1bbc93', 'storagedomainID': '2a4a3cce-f2f6-4ddd-b337-df5ef562f520', 'imageID': '750428bd-1273-467f-9b27-7f6fe58a446c',
'volumeID': '1c89c678-f883-4e61-945c-5f7321add343'} failed:\n(code=309, message=Unknown pool id, pool not connected: ('06c58622-f99b-11ea-9122-00163e1bbc93',))\ntar: This does not look like a tar archive\ntar: 6023764f-5547-4b23-92ca-422eafdf3f87.ovf: Not found in archive\ntar: Exiting with failure status due to previous errors", 'rc': 2, 'start': '2020-09-21 17:16:26.030343', 'end': '2020-09-21 17:16:26.381862', 'delta': '0:00:00.351519', 'changed': True, 'failed': True, 'invocation': {'module_args': {'warn': False, '_raw_params': "vdsm-client Image prepare storagepoolID=06c58622-f99b-11ea-9122-00163e1bbc93 storagedomainID=2a4a3cce-f2f6-4ddd-b337-df5ef562f520 imageID=750428bd-1273-467f-9b27-7f6fe58a446c volumeID=1c89c678-f883-4e61-945c-5f7321add343 | grep path | awk '{ print $2 }' | xargs -I{} sudo -u vdsm dd if={} | tar -tvf - 6023764f-5547-4b23-92ca-422eafdf3f87.ovf", '_uses_shell': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable
': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["vdsm-client: Command Image.prepare with args {'storagepoolID': '06c58622-f99b-11ea-9122-00163e1bbc93', 'storagedomainID': '2a4a3cce-f2f6-4ddd-b337-df5ef562f520', 'imageID': '750428bd-1273-467f-9b27-7f6fe58a446c', 'volumeID': '1c89c678-f883-4e61-945c-5f7321add343'} failed:", "(code=309, message=Unknown pool id, pool not connected: ('06c58622-f99b-11ea-9122-00163e1bbc93',))", 'tar: This does not look like a tar archive', 'tar: 6023764f-5547-4b23-92ca-422eafdf3f87.ovf: Not found in archive', 'tar: Exiting with failure status due to previous errors'], '_ansible_no_log': False, 'attempts':
12, 'item': {'name': 'OVF_STORE', 'image_id': '1c89c678-f883-4e61-945c-5f7321add343', 'id': '750428bd-1273-467f-9b27-7f6fe58a446c'}, 'ansible_loop_var': 'item', '_ansible_item_label': {'name': 'OVF_STORE', 'image_id': '1c89c678-f883-4e61-945c-5f7321add343', 'id': '750428bd-1273-467f-9b27-7f6fe58a446c'}}
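The repeated failure in both dumps is code=309, "Unknown pool id, pool not connected": the host extracting the OVF_STORE is not connected to the storage pool referenced by the backup. Two host-side checks, assuming the stock vdsm-client verbs:

# Which storage pool, if any, is VDSM on node2 connected to?
vdsm-client Host getConnectedStoragePools
# Is the NFS export actually mounted under VDSM's tree?
mount | grep /rhev/data-center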
For the "domain data" steps, I created the same folder "/data" on the node2 server that should be used as NFS domain data (and, the playbook added correctly the storage):
Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]:
Please specify the nfs version you would like to use (auto, v3, v4, v4_0, v4_1, v4_2)[auto]:
Please specify the full shared storage connection path to use (example: host:/path): node2-server.tld:/data
If needed, specify additional mount options for the connection to the hosted-engine storagedomain (example: rsize=32768,wsize=32768) []:
[ INFO ] Creating Storage Domain
[ INFO ] TASK [ovirt.hosted_engine_setup : Execute just a specific set of steps]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Force facts gathering]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Wait for the storage interface to be up]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Check local VM dir stat]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Enforce local VM dir existence]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Obtain SSO token using username/password credentials]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Fetch host facts]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Fetch cluster ID]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Fetch cluster facts]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Fetch Datacenter facts]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Fetch Datacenter ID]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Fetch Datacenter name]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Add NFS storage domain]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Add glusterfs storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Add iSCSI storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Add Fibre Channel storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Get storage domain details]
[ INFO ] ok: [localhost]
How to discover why a VM is getting suspended without recovery possibility?
by Vinícius Ferrão
Hello,
I have an Exchange Server VM that keeps going into a suspended state with no possibility of recovery. I need to click shutdown and then power it on again. I can't find anything useful in the logs, except in "dmesg" on the host:
[47807.747606] *** Guest State ***
[47807.747633] CR0: actual=0x0000000000050032, shadow=0x0000000000050032, gh_mask=fffffffffffffff7
[47807.747671] CR4: actual=0x0000000000002050, shadow=0x0000000000000000, gh_mask=fffffffffffff871
[47807.747721] CR3 = 0x00000000001ad002
[47807.747739] RSP = 0xffffc20904fa3770 RIP = 0x0000000000008000
[47807.747766] RFLAGS=0x00000002 DR7 = 0x0000000000000400
[47807.747792] Sysenter RSP=0000000000000000 CS:RIP=0000:0000000000000000
[47807.747821] CS: sel=0x9100, attr=0x08093, limit=0xffffffff, base=0x000000007ff91000
[47807.747855] DS: sel=0x0000, attr=0x08093, limit=0xffffffff, base=0x0000000000000000
[47807.747889] SS: sel=0x0000, attr=0x08093, limit=0xffffffff, base=0x0000000000000000
[47807.747923] ES: sel=0x0000, attr=0x08093, limit=0xffffffff, base=0x0000000000000000
[47807.747957] FS: sel=0x0000, attr=0x08093, limit=0xffffffff, base=0x0000000000000000
[47807.747991] GS: sel=0x0000, attr=0x08093, limit=0xffffffff, base=0x0000000000000000
[47807.748025] GDTR: limit=0x00000057, base=0xffff80817e7d5fb0
[47807.748059] LDTR: sel=0x0000, attr=0x10000, limit=0x000fffff, base=0x0000000000000000
[47807.748093] IDTR: limit=0x00000000, base=0x0000000000000000
[47807.748128] TR: sel=0x0040, attr=0x0008b, limit=0x00000067, base=0xffff80817e7d4000
[47807.748162] EFER = 0x0000000000000000 PAT = 0x0007010600070106
[47807.748189] DebugCtl = 0x0000000000000000 DebugExceptions = 0x0000000000000000
[47807.748221] Interruptibility = 00000009 ActivityState = 00000000
[47807.748248] *** Host State ***
[47807.748263] RIP = 0xffffffffc0c65024 RSP = 0xffff9252bda5fc90
[47807.748290] CS=0010 SS=0018 DS=0000 ES=0000 FS=0000 GS=0000 TR=0040
[47807.748318] FSBase=00007f46d462a700 GSBase=ffff9252ffac0000 TRBase=ffff9252ffac4000
[47807.748351] GDTBase=ffff9252ffacc000 IDTBase=ffffffffff528000
[47807.748377] CR0=0000000080050033 CR3=000000105ac8c000 CR4=00000000001627e0
[47807.748407] Sysenter RSP=0000000000000000 CS:RIP=0010:ffffffff8f196cc0
[47807.748435] EFER = 0x0000000000000d01 PAT = 0x0007050600070106
[47807.748461] *** Control State ***
[47807.748478] PinBased=0000003f CPUBased=b6a1edfa SecondaryExec=00000ceb
[47807.748507] EntryControls=0000d1ff ExitControls=002fefff
[47807.748531] ExceptionBitmap=00060042 PFECmask=00000000 PFECmatch=00000000
[47807.748561] VMEntry: intr_info=00000000 errcode=00000006 ilen=00000000
[47807.748589] VMExit: intr_info=00000000 errcode=00000000 ilen=00000001
[47807.748618] reason=80000021 qualification=0000000000000000
[47807.748645] IDTVectoring: info=00000000 errcode=00000000
[47807.748669] TSC Offset = 0xfffff9b8c8d943b6
[47807.748699] TPR Threshold = 0x00
[47807.748715] EPT pointer = 0x000000105cd5601e
[47807.748735] PLE Gap=00000080 Window=00001000
[47807.748755] Virtual processor ID = 0x0003
So something really went crazy. The VM has been going down at least twice a day for the last 5 days.
At first I thought it was a hardware issue, so I restarted the VM on another host, and the same thing happened.
About the VM: it is configured with 10 CPUs and 48 GB of RAM, running on oVirt 4.3.10 with iSCSI storage to a FreeNAS box, where the VM disks live; there is a 300 GB disk for C:\ and a 2 TB disk for D:\.
Any idea on how to start troubleshooting it?
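A few host-side places worth checking first, assuming default oVirt/libvirt log locations (the qemu log file is named after the VM): the reason=80000021 line in the dump is a VM-entry failure due to invalid guest state, which usually surfaces as a KVM internal error in the qemu log.

grep -i kvm /var/log/messages              # host-side KVM errors around the pause
less /var/log/libvirt/qemu/VM_NAME.log     # qemu's record of the guest failure (VM_NAME is a placeholder)
less /var/log/vdsm/vdsm.log                # why VDSM reported the VM as paused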
Thanks,
hosted engine migration
by 董青龙
Hi all,
I have an oVirt 4.3.10.4 environment with 2 hosts. Normal VMs in this environment can be migrated, but the hosted engine VM cannot be migrated. Can anyone help? Thanks a lot!
hosts status:
normal vm migration:
hosted engine vm migration:
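A first check that often explains a hosted engine VM that refuses to migrate, using the standard HA tooling: the target host must show up with a positive score.

# Run on each host; the migration target needs a positive score and no local maintenance:
hosted-engine --vm-status
# The HA services must be healthy on the target host:
systemctl status ovirt-ha-agent ovirt-ha-broker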
in ovirt-engine all host show the status at "non-responsive"
by momokch@yahoo.com.hk
Hello everyone,
I apologize for asking what is probably a very basic question.
I have 4 hosts running on the ovirt-engine, but in the last week alerts show that all the hosts' status has gone to non-responsive, with statements like the following:
Host 1 became non responsive. It has no power management configured. Please check the host status, manually reboot it, and click "Confirm Host Has Been Rebooted"
All the storage and VMs can no longer be detected by the ovirt-engine. Has anyone faced this before?
The alerts also show "Engine's certification has expired at 2020-08-18. Please renew the engine's certification".
Is this related to the hosts becoming non-responsive?
I have already regenerated the ovirt-engine cert following this previous post:
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/ZI5WNU6OB6FZ...
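An expired engine CA or host certificate will indeed break engine-host communication. A quick way to check the expiry dates, using the default 4.x paths (first two on the engine, the last on each host):

openssl x509 -enddate -noout -in /etc/pki/ovirt-engine/ca.pem
openssl x509 -enddate -noout -in /etc/pki/ovirt-engine/certs/engine.cer
openssl x509 -enddate -noout -in /etc/pki/vdsm/certs/vdsmcert.pem

If the host certificates have expired as well, they normally need re-enrollment (Host > Installation > Enroll Certificate) or a host reinstall after the engine PKI is fixed.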
Yours Sincerely,
school it