
Hmm, try:

pvscan /dev/vdc

Use fdisk to see if the partitions can be read. Also, when you copied the data with dd, did you use the conv=fsync option? If not, your data could still be cached in memory waiting to be flushed to disk even though the dd command completed. Try the copy again with conv=fsync.

On Fri, Jun 2, 2017 at 8:31 PM, Николаев Алексей <alexeynikolaev.post@yandex.ru> wrote:
02.06.2017, 16:09, "Adam Litke" <alitke@redhat.com>:
hmm, strange. So all three of those missing volumes have associated LVs. Can you try to activate them and see if you can read from them?
lvchange --config 'global {use_lvmetad=0}' -ay 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e/3b089aed-b3e1-4423-8585-e65752d19ffe
dd if=/dev/0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e/3b089aed-b3e1-4423-8585-e65752d19ffe of=/dev/null bs=1M count=1
# lvchange --config 'global {use_lvmetad=0}' -ay 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e/3b089aed-b3e1-4423-8585-e65752d19ffe
  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
# dd if=/dev/0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e/3b089aed-b3e1-4423-8585-e65752d19ffe of=/dev/null bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00918524 s, 114 MB/s
# lvchange --config 'global {use_lvmetad=0}' -ay 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e/03917876-0e28-4457-bf44-53c7ea2b4d12
  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
# dd if=/dev/0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e/03917876-0e28-4457-bf44-53c7ea2b4d12 of=/dev/null bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00440839 s, 238 MB/s
# lvchange --config 'global {use_lvmetad=0}' -ay 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e/fd8822ee-4fc9-49ba-9760-87a85d56bf91
  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
# dd if=/dev/0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e/fd8822ee-4fc9-49ba-9760-87a85d56bf91 of=/dev/null bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00448938 s, 234 MB/s
Well, the read operations are OK.
If this works, then one way to recover the data is to use the UI to create new disks of the same size as the old ones. Then activate the LVs associated with both the old volumes and the new ones, use dd (or qemu-img convert) to copy from old to new, and attach the new disks to your VM.
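Since only the dd variant is shown later in this thread, here is a rough sketch of those steps using qemu-img convert instead, with <sd-vg>, <old-lv> and <new-lv> as placeholders for the storage-domain VG and volume UUIDs (the real values appear in the commands further down), and assuming the source volume is raw (check with qemu-img info first):

lvchange --config 'global {use_lvmetad=0}' -ay <sd-vg>/<old-lv> <sd-vg>/<new-lv>
qemu-img convert -p -f raw -O raw /dev/<sd-vg>/<old-lv> /dev/<sd-vg>/<new-lv>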
I have created a new disk in the UI and activated it.
6050885b-5dd5-476c-b907-4ce2b3f37b0a : {"DiskAlias":"r13-sed-app_Disk1-recovery","DiskDescription":""}.
# lvchange --config 'global {use_lvmetad=0}' -ay 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e/6050885b-5dd5-476c-b907-4ce2b3f37b0a
Then I copied with dd from the old disk (DiskAlias: r13-sed-app_Disk1) to the new one (DiskAlias: r13-sed-app_Disk1-recovery):
# dd if=/dev/0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e/3b089aed-b3e1-4423-8585-e65752d19ffe of=/dev/0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e/6050885b-5dd5-476c-b907-4ce2b3f37b0a status=progress
18215707136 bytes (18 GB) copied, 496.661644 s, 36.7 MB/s
35651584+0 records in
35651584+0 records out
18253611008 bytes (18 GB) copied, 502.111 s, 36.4 MB/s
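For reference, a re-run of the same copy with an explicit flush (the fsync suggestion at the top of this thread, spelled conv=fsync in GNU dd so the output device is flushed before dd exits) would look roughly like this:

dd if=/dev/0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e/3b089aed-b3e1-4423-8585-e65752d19ffe of=/dev/0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e/6050885b-5dd5-476c-b907-4ce2b3f37b0a bs=1M conv=fsync status=progress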
I added the new disk to an existing VM (it shows up as vdc), but I can't see the LVM volumes on this disk. Where could I be wrong?
# lsblk
NAME                              MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                                11:0    1 1024M  0 rom
vda                               252:0    0   50G  0 disk
├─vda1                            252:1    0  200M  0 part /boot
└─vda2                            252:2    0 49,8G  0 part
  ├─vg_r34seddb-LogVol03 (dm-0)   253:0    0 26,8G  0 lvm  /
  ├─vg_r34seddb-LogVol02 (dm-1)   253:1    0    8G  0 lvm  [SWAP]
  ├─vg_r34seddb-LogVol01 (dm-3)   253:3    0    5G  0 lvm  /tmp
  └─vg_r34seddb-LogVol00 (dm-4)   253:4    0   10G  0 lvm  /var
vdb                               252:16   0 1000G  0 disk
└─vdb1                            252:17   0 1000G  0 part
  └─vg_r34seddb00-LogVol00 (dm-2) 253:2    0 1000G  0 lvm  /var/lib/pgsql
vdc                               252:32   0   50G  0 disk
On Thu, Jun 1, 2017 at 6:44 PM, Николаев Алексей <alexeynikolaev.post@yandex.ru> wrote:
Thx for your help!
01.06.2017, 16:46, "Adam Litke" <alitke@redhat.com>:
When you say "not visible in oVirt", do you mean that you do not see them in the UI?
Yes, I can see some VMs with the prefix "external-" and without disks.
Do you know the specific UUIDs for the missing volumes? You could use LVM to check whether the LVs are visible to the host:
lvs --config 'global {use_lvmetad=0}' -o +tags
For each LV, the tag beginning with IU_ indicates the image the volume belongs to.
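For example, assuming the host's LVM supports selecting by tag, the volumes belonging to a single image can be listed by passing the IU_ tag as a selector (the UUID below is the IU_ tag of volume 3b089aed-b3e1-4423-8585-e65752d19ffe in the output that follows):

lvs --config 'global {use_lvmetad=0}' -o +tags @IU_986849a8-04ea-4b7d-a29f-f023da9020b3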
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert LV Tags
03917876-0e28-4457-bf44-53c7ea2b4d12 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e -wi-------  117,00g IU_3edd1f60-fd43-4e12-9615-b12bcd1a17ab,MD_6,PU_00000000-0000-0000-0000-000000000000
309a325a-6f13-4a24-b204-e825ddbf0e41 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e -wi-ao----  300,00g IU_9cbde83c-7d1d-4b78-bc7a-6d540584c888,MD_17,PU_00000000-0000-0000-0000-000000000000
3b089aed-b3e1-4423-8585-e65752d19ffe 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e -wi-------   17,00g IU_986849a8-04ea-4b7d-a29f-f023da9020b3,MD_5,PU_00000000-0000-0000-0000-000000000000
48b6f4f5-9eeb-4cf3-922a-c9c0c2239bee 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e -wi-------  128,00m IU_caed7169-0c90-467f-86d3-a82148b1f0af,MD_10,PU_00000000-0000-0000-0000-000000000000
7700e1e1-351b-49b5-8681-e121bbf67177 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e -wi-------   83,00g IU_9cf3a067-2996-4ff2-b481-b13f7cc73c33,MD_7,PU_00000000-0000-0000-0000-000000000000
91a7aa8c-4e13-477c-8703-d8224b85bc84 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e -wi-ao---- 1000,00g IU_fd2ecb98-a8c5-4aff-a6d3-7ac3087ab994,MD_15,PU_00000000-0000-0000-0000-000000000000
a3f29de7-c6b9-410e-b635-9b3016da7ba2 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e -wi-------    7,50g IU_71f18680-1b93-4a2a-9bd1-95baeccf2d89,MD_11,PU_00000000-0000-0000-0000-000000000000
ba26f8b7-b807-4a6c-a840-6a83e8ec526e 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e -wi-------    8,00g IU_e204e228-fa35-4457-a31d-3b7964156538,MD_8,PU_00000000-0000-0000-0000-000000000000
ba29b8fd-3618-42d6-a70f-17d883fde0ed 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e -wi-------   50,00g IU_c46d5cbb-1802-4086-a31a-5b3f0b874454,MD_16,PU_00000000-0000-0000-0000-000000000000
ba92b801-3619-4753-9e06-3e2028a408cb 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e -wi-------  128,00m IU_0d0983f0-60d7-4ce0-bf1b-12cfc456acd8,MD_9,PU_00000000-0000-0000-0000-000000000000
c0bcf836-c648-4421-9340-febcc0e0abfe 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e -wi-------   50,00g IU_4fe1b047-60b2-4bc3-b73b-2fad8a81cc02,MD_13,PU_00000000-0000-0000-0000-000000000000
c1d08688-aad1-478e-82a3-e5b5fde85706 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e -wi-ao---- 1000,00g IU_f58cd721-d8b8-4794-b059-8789a9fecf62,MD_18,PU_00000000-0000-0000-0000-000000000000
c7a4782f-cde6-40db-a625-810fd2856dfa 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e -wi-------  300,00g IU_8917cb1a-8bd4-4386-a95a-0189a04866ad,MD_19,PU_00000000-0000-0000-0000-000000000000
cb36e25a-b9b2-4f54-9b03-a10837bc26ab 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e -wi-------   50,00g IU_e8720901-0383-4266-98a7-fb5b9fb27e52,MD_14,PU_00000000-0000-0000-0000-000000000000
f60ccaa4-663b-4a39-8ad0-2ed3fb208cb0 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e -wi-------   50,00g IU_f0d441bb-2bd2-4523-ab59-beef544727b5,MD_12,PU_00000000-0000-0000-0000-000000000000
fd8822ee-4fc9-49ba-9760-87a85d56bf91 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e -wi-------  413,00g IU_7be49698-f3a5-4995-b411-f0490a819950,MD_4,PU_00000000-0000-0000-0000-000000000000
ids 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e -wi-ao---- 128,00m
inbox 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e -wi-a----- 128,00m
leases 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e -wi-a----- 2,00g
master 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e -wi-a----- 1,00g
metadata 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e -wi-a----- 512,00m
outbox 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e -wi-a----- 128,00m
root centos_node13-05 -wi-ao---- 101,49g
swap centos_node13-05 -wi-ao---- 9,77g
lv-filestore vg-filestore -wi-a----- 1000,00g
lv-opt vg-opt -wi-a----- 300,00g
lv-postgres vg-postgres -wi-a----- 1000,00g
You could also try to use the vdsm commands:
I can only use vdsClient on 3.5...
# Get the storage pool id
sudo vdsm-client Host getConnectedStoragePools
vdsClient -s 0 getConnectedStoragePoolsList
00000002-0002-0002-0002-00000000009b
# Get a list of storage domain IDs
sudo vdsm-client Host getStorageDomains  # gives you a list of storage domain IDs
vdsClient -s 0 getStorageDomainsList
0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e
# Get a list of image ids on a domain
sudo vdsm-client StorageDomain getImages sdUUID=<domain id>
vdsClient -s 0 getImagesList 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e
c46d5cbb-1802-4086-a31a-5b3f0b874454
f0d441bb-2bd2-4523-ab59-beef544727b5
4fe1b047-60b2-4bc3-b73b-2fad8a81cc02
0d0983f0-60d7-4ce0-bf1b-12cfc456acd8
71f18680-1b93-4a2a-9bd1-95baeccf2d89
caed7169-0c90-467f-86d3-a82148b1f0af
986849a8-04ea-4b7d-a29f-f023da9020b3
9cf3a067-2996-4ff2-b481-b13f7cc73c33
3edd1f60-fd43-4e12-9615-b12bcd1a17ab
e8720901-0383-4266-98a7-fb5b9fb27e52
fd2ecb98-a8c5-4aff-a6d3-7ac3087ab994
e204e228-fa35-4457-a31d-3b7964156538
f58cd721-d8b8-4794-b059-8789a9fecf62
8917cb1a-8bd4-4386-a95a-0189a04866ad
9cbde83c-7d1d-4b78-bc7a-6d540584c888
7be49698-f3a5-4995-b411-f0490a819950
# Get a list of volume ids in an image
sudo vdsm-client Image getVolumes imageID=<image> storagepoolID=<pool> storagedomainID=<domain>
vdsClient -s 0 getVolumesList 0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e 00000002-0002-0002-0002-00000000009b
ERROR: ba29b8fd-3618-42d6-a70f-17d883fde0ed : {'status': {'message': "Image path does not exist or cannot be accessed/created: ('/rhev/data-center/mnt/blockSD/0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e/images/c46d5cbb-1802-4086-a31a-5b3f0b874454',)", 'code': 254}}
ERROR: f60ccaa4-663b-4a39-8ad0-2ed3fb208cb0 : {'status': {'message': "Image path does not exist or cannot be accessed/created: ('/rhev/data-center/mnt/blockSD/0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e/images/f0d441bb-2bd2-4523-ab59-beef544727b5',)", 'code': 254}}
ERROR: c0bcf836-c648-4421-9340-febcc0e0abfe : {'status': {'message': "Image path does not exist or cannot be accessed/created: ('/rhev/data-center/mnt/blockSD/0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e/images/4fe1b047-60b2-4bc3-b73b-2fad8a81cc02',)", 'code': 254}}
ba92b801-3619-4753-9e06-3e2028a408cb : {"Updated":true,"Disk Description":"OVF_STORE","Storage Domains":[{"uuid":"0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e"}],"Last Updated":"Tue May 30 11:50:06 MSK 2017","Size":61440}.
ERROR: a3f29de7-c6b9-410e-b635-9b3016da7ba2 : {'status': {'message': "Image path does not exist or cannot be accessed/created: ('/rhev/data-center/mnt/blockSD/0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e/images/71f18680-1b93-4a2a-9bd1-95baeccf2d89',)", 'code': 254}}
48b6f4f5-9eeb-4cf3-922a-c9c0c2239bee : {"Updated":true,"Disk Description":"OVF_STORE","Storage Domains":[{"uuid":"0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e"}],"Last Updated":"Tue May 30 11:50:06 MSK 2017","Size":61440}.
3b089aed-b3e1-4423-8585-e65752d19ffe : {"DiskAlias":"r13-sed-app_Disk1","DiskDescription":""}.
7700e1e1-351b-49b5-8681-e121bbf67177 : {"DiskAlias":"r13-sed-db_Disk2","DiskDescription":""}.
03917876-0e28-4457-bf44-53c7ea2b4d12 : {"DiskAlias":"r13-sed-app_Disk2","DiskDescription":""}.
ERROR: cb36e25a-b9b2-4f54-9b03-a10837bc26ab : {'status': {'message': "Image path does not exist or cannot be accessed/created: ('/rhev/data-center/mnt/blockSD/0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e/images/e8720901-0383-4266-98a7-fb5b9fb27e52',)", 'code': 254}}
ERROR: 91a7aa8c-4e13-477c-8703-d8224b85bc84 : {'status': {'message': "Image path does not exist or cannot be accessed/created: ('/rhev/data-center/mnt/blockSD/0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e/images/fd2ecb98-a8c5-4aff-a6d3-7ac3087ab994',)", 'code': 254}}
ba26f8b7-b807-4a6c-a840-6a83e8ec526e : {"DiskAlias":"r13-sed-db_Disk1","DiskDescription":""}.
ERROR: c1d08688-aad1-478e-82a3-e5b5fde85706 : {'status': {'message': "Image path does not exist or cannot be accessed/created: ('/rhev/data-center/mnt/blockSD/0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e/images/f58cd721-d8b8-4794-b059-8789a9fecf62',)", 'code': 254}}
ERROR: c7a4782f-cde6-40db-a625-810fd2856dfa : {'status': {'message': "Image path does not exist or cannot be accessed/created: ('/rhev/data-center/mnt/blockSD/0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e/images/8917cb1a-8bd4-4386-a95a-0189a04866ad',)", 'code': 254}}
ERROR: 309a325a-6f13-4a24-b204-e825ddbf0e41 : {'status': {'message': "Image path does not exist or cannot be accessed/created: ('/rhev/data-center/mnt/blockSD/0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e/images/9cbde83c-7d1d-4b78-bc7a-6d540584c888',)", 'code': 254}}
fd8822ee-4fc9-49ba-9760-87a85d56bf91 : {"DiskAlias":"r13-sed-app_Disk3","DiskDescription":""}.
If the wanted images are present on the host but not in the engine, then you had a problem with the import. If the images are not visible to the host, then you probably have some storage connection issue.
These disks are present in oVirt and the VMs work OK:
ba26f8b7-b807-4a6c-a840-6a83e8ec526e : {"DiskAlias":"r13-sed-db_Disk1","DiskDescription":""}.
7700e1e1-351b-49b5-8681-e121bbf67177 : {"DiskAlias":"r13-sed-db_Disk2","DiskDescription":""}.
I can also see other disks that are NOT present in the oVirt UI:
3b089aed-b3e1-4423-8585-e65752d19ffe : {"DiskAlias":"r13-sed-app_Disk1","DiskDescription":""}.
03917876-0e28-4457-bf44-53c7ea2b4d12 : {"DiskAlias":"r13-sed-app_Disk2","DiskDescription":""}.
fd8822ee-4fc9-49ba-9760-87a85d56bf91 : {"DiskAlias":"r13-sed-app_Disk3","DiskDescription":""}.
Is there any way to manage/copy these disks?
Are the other disks with the ERROR code fully corrupted?
On Wed, May 31, 2017 at 6:11 PM, Николаев Алексей <alexeynikolaev.post@yandex.ru> wrote:
Please advise where I can find some material on how oVirt (or maybe VDSM) works with block storage and multipath.
30.05.2017, 11:57, "Николаев Алексей" <alexeynikolaev.post@yandex.ru>:
Hi, community!
After importing a data domain, some VM disks are not visible in oVirt. Can I manually copy VM disks from the data domain on an FC block device?
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
--
Adam Litke