On Fri, Mar 10, 2017 at 11:54 PM, Nir Soffer <nsoffer(a)redhat.com> wrote:
On Fri, Mar 10, 2017 at 6:24 PM, Gianluca Cecchi <gianluca.cecchi(a)gmail.com> wrote:
> Hello,
> on my 4.1 environment I sometimes get this kind of message, typically when
> I create a snapshot.
> Please note that the snapshot is correctly created: I get the event between
> the "snapshot creation initiated" and "completed" events.
>
>
> VDSM ovmsrv05 command TeardownImageVDS failed: Cannot deactivate Logical
> Volume: ('General Storage Exception: ("5 [] [\' WARNING: Not using lvmetad
> because config setting use_lvmetad=0.\', \' WARNING: To avoid corruption,
> rescan devices to make changes visible (pvscan --cache).\', \' Logical volume
> 922b5269-ab56-4c4d-838f-49d33427e2ab/79350ec5-eea5-458b-a3ee-ba394d2cda27 in use.\',
> \' Logical volume
> 922b5269-ab56-4c4d-838f-49d33427e2ab/3590f38b-1e3b-4170-a901-801ee5d21d59 in use.\']\\n922b5269-ab56-4c4d-838f-49d33427e2ab/[\'3590f38b-1e3b-4170-a901-801ee5d21d59\', \'79350ec5-eea5-458b-a3ee-ba394d2cda27\']",)',)
>
> What is the exact meaning, and how can I crosscheck to resolve the possible
> anomaly?
> What about the lvmetad reference inside the message? Is it an option that
> can be configured in oVirt, or not at all for now?
Can you share the output of:

  lvs -o vg_name,lv_name,size,attr,devices
  vgs -o vg_name,pv_name

after you get this error?
lvmetad is not compatible with oVirt shared storage; this is why you get
these warnings. It is disabled in the latest 4.1.
If you don't want to wait for the release, you can disable it now:
systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket
systemctl disable lvm2-lvmetad.service lvm2-lvmetad.socket
systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
Edit /etc/lvm/lvm.conf and set:
use_lvmetad = 0
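To double-check that lvmetad is really off after these steps, a quick verification sketch (unit names as on EL7; adjust if your distribution differs):

```shell
# Both units should report "inactive" once masked and stopped:
systemctl is-active lvm2-lvmetad.service lvm2-lvmetad.socket

# And lvm.conf should now carry the uncommented setting use_lvmetad = 0:
grep -E '^[[:space:]]*use_lvmetad[[:space:]]*=' /etc/lvm/lvm.conf
```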
Please report if you still get errors about "Logical volume xxx/yyy in use"
after disabling lvmetad. If you do, please file a bug.
Nir
Hello,
I didn't get a chance to run the requested commands on the former FC SAN
domain.
But now I'm getting the same on an iSCSI domain I'm testing, so here are
the details.
Both engine and host are on 4.1.1.
I don't see the lvmetad part in the error message; perhaps it has been
removed in 4.1.1 with respect to the 4.1 above?
I got this while creating a snapshot of a running VM
Apr 4, 2017 2:22:49 PM Snapshot 'test' creation for VM 'Oracle7_IMPORTED'
has been completed.
Apr 4, 2017 2:22:49 PM VDSM ov300 command TeardownImageVDS failed: Cannot
deactivate Logical Volume: ('General Storage Exception: ("5 [] [\' Logical
volume 94795322-92a7-4904-80c1-83d6b629d38a/564ef695-a6cf-4425-a44f-101fa6bc11e4 in use.\',
\' Logical volume
94795322-92a7-4904-80c1-83d6b629d38a/0441718e-13ce-4562-b69e-74cee0972083 in use.\']\\n94795322-92a7-4904-80c1-83d6b629d38a/[\'0441718e-13ce-4562-b69e-74cee0972083\', \'564ef695-a6cf-4425-a44f-101fa6bc11e4\']",)',)
Apr 4, 2017 2:22:36 PM VDSM ov300 command TeardownImageVDS failed: Cannot
deactivate Logical Volume: ('General Storage Exception: ("5 [] [\' Logical
volume 94795322-92a7-4904-80c1-83d6b629d38a/845a9008-8ece-4cf2-9828-f286640b5f0e in use.\',
\' Logical volume
94795322-92a7-4904-80c1-83d6b629d38a/ecdc6fc6-629a-43a7-a756-48b6ee695b19 in use.\']\\n94795322-92a7-4904-80c1-83d6b629d38a/[\'ecdc6fc6-629a-43a7-a756-48b6ee695b19\', \'845a9008-8ece-4cf2-9828-f286640b5f0e\']",)',)
Apr 4, 2017 2:21:57 PM Snapshot 'test' creation for VM 'Oracle7_IMPORTED'
was initiated by g.cecchi@internal-authz.
[root@ov300 vdsm]# lvs -o vg_name,lv_name,size,attr,devices
  VG                                   LV                                     LSize   Attr       Devices
  94795322-92a7-4904-80c1-83d6b629d38a 0441718e-13ce-4562-b69e-74cee0972083     4.00g -wi-ao---- /dev/mapper/364817197b5dfd0e5538d959702249b1c(713)
  94795322-92a7-4904-80c1-83d6b629d38a 564ef695-a6cf-4425-a44f-101fa6bc11e4     8.00g -wi-ao---- /dev/mapper/364817197b5dfd0e5538d959702249b1c(41)
  94795322-92a7-4904-80c1-83d6b629d38a 845a9008-8ece-4cf2-9828-f286640b5f0e    72.00g -wi-ao---- /dev/mapper/364817197b5dfd0e5538d959702249b1c(105)
  94795322-92a7-4904-80c1-83d6b629d38a c306d9e1-dcd2-47fc-9500-4e5f9ff3aee3   128.00m -wi------- /dev/mapper/364817197b5dfd0e5538d959702249b1c(39)
  94795322-92a7-4904-80c1-83d6b629d38a c424dad7-ce65-47c3-a517-f3294745bc63   128.00m -wi------- /dev/mapper/364817197b5dfd0e5538d959702249b1c(40)
  94795322-92a7-4904-80c1-83d6b629d38a ecdc6fc6-629a-43a7-a756-48b6ee695b19     4.00g -wi-ao---- /dev/mapper/364817197b5dfd0e5538d959702249b1c(681)
  94795322-92a7-4904-80c1-83d6b629d38a ids                                    128.00m -wi-ao---- /dev/mapper/364817197b5dfd0e5538d959702249b1c(29)
  94795322-92a7-4904-80c1-83d6b629d38a inbox                                  128.00m -wi-a----- /dev/mapper/364817197b5dfd0e5538d959702249b1c(30)
  94795322-92a7-4904-80c1-83d6b629d38a leases                                   2.00g -wi-a----- /dev/mapper/364817197b5dfd0e5538d959702249b1c(13)
  94795322-92a7-4904-80c1-83d6b629d38a master                                   1.00g -wi-ao---- /dev/mapper/364817197b5dfd0e5538d959702249b1c(31)
  94795322-92a7-4904-80c1-83d6b629d38a metadata                               512.00m -wi-a----- /dev/mapper/364817197b5dfd0e5538d959702249b1c(0)
  94795322-92a7-4904-80c1-83d6b629d38a outbox                                 128.00m -wi-a----- /dev/mapper/364817197b5dfd0e5538d959702249b1c(4)
  94795322-92a7-4904-80c1-83d6b629d38a xleases                                  1.00g -wi-a----- /dev/mapper/364817197b5dfd0e5538d959702249b1c(5)
  cl                                   root                                   119.12g -wi-ao---- /dev/sda2(0)
  cl                                   swap                                    16.00g -wi-ao---- /dev/sda2(30495)
[root@ov300 vdsm]#
[root@ov300 vdsm]# vgs -o vg_name,pv_name
  VG                                   PV
  94795322-92a7-4904-80c1-83d6b629d38a /dev/mapper/364817197b5dfd0e5538d959702249b1c
  cl                                   /dev/sda2
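As a crosscheck (a generic LVM sketch, not oVirt-specific tooling): in the lvs Attr string, the sixth character is the "device open" flag, and it is 'o' for exactly the LVs named in the TeardownImageVDS errors, meaning their device nodes are still held open by some process (typically qemu for the running VM), which is why deactivation fails:

```shell
#!/bin/sh
# The 6th character of the lvs Attr string is the "device open" flag:
# 'o' means some process still holds the LV's device node open, so
# "lvchange -an" (deactivation) on it must fail.
attr="-wi-ao----"   # example Attr value copied from the lvs output above

if [ "$(printf '%s' "$attr" | cut -c6)" = "o" ]; then
    echo "LV device is open (in use) - deactivation will fail"
else
    echo "LV device is closed - safe to deactivate"
fi
```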
BTW: when I try, in the web admin GUI, to select all the text to copy and
paste it, the only place where I can get it is by going directly under the
top-level "System"; otherwise the 2 lines related to VDSM are not displayed.
Is that correct/expected?
Thanks for any help
Gianluca