[ovirt-users] [Qemu-block] qcow2 images corruption
Nicolas Ecarnot
nicolas at ecarnot.net
Tue Feb 13 15:26:40 UTC 2018
Hello Kevin,
On 13/02/2018 at 10:41, Kevin Wolf wrote:
> On 07.02.2018 at 18:06, Nicolas Ecarnot wrote:
>> TL; DR : qcow2 images keep getting corrupted. Any workaround?
>
> Not without knowing the cause.
Actually, my main concern is mostly about finding the cause rather than
fixing my corrupted VMs.
To put it another way: I would rather help oVirt than help myself.
> The first thing to make sure is that the image isn't touched by a second
> process while QEMU is running a VM.
Indeed, I read some BZs about this issue: they were raised by a user who
ran qemu-img commands on a "mounted" image, thus leading to
corruption.
In my case, I am doing nothing of the sort, and the corrupted VMs were only
touched by standard oVirt actions.
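For what it's worth, a quick way to double-check that nothing besides QEMU
holds a given disk open would be something like this (the VG/LV names are
placeholders for our setup):

# list the processes holding the logical volume open; only the qemu-kvm
# process of the corresponding VM should show up
lsof /dev/<vg_name>/<lv_name>
fuser -v /dev/<vg_name>/<lv_name>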
> The classic one is using 'qemu-img
> snapshot' on the image of a running VM, which is instant corruption (and
> newer QEMU versions have locking in place to prevent this), but we have
> seen more absurd cases of things outside QEMU tampering with the image
> when we were investigating previous corruption reports.
>
> This covers the majority of all reports, we haven't had a real
> corruption caused by a QEMU bug in ages.
May I ask in which QEMU version this kind of locking was added?
As I wrote, our oVirt setup is 3.6, so not recent.
>
>> After having found (https://access.redhat.com/solutions/1173623) the right
>> logical volume hosting the qcow2 image, I can run qemu-img check on it.
>> - On 80% of my VMs, I find no errors.
>> - On 15% of them, I find leaked cluster errors that I can correct using
>> "qemu-img check -r all".
>> - On 5% of them, I find leaked cluster errors and further fatal errors,
>> which cannot be corrected with qemu-img.
>> In rare cases, qemu-img can correct them but destroys large parts of the
>> image (it becomes unusable), and in other cases it cannot correct them at all.
>
> It would be good if you could make the 'qemu-img check' output available
> somewhere.
See attachment.
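For reference, the checks mentioned above boil down to commands of this form
(the device path is a placeholder for the logical volume found earlier):

# read-only check, reports leaked clusters and errors
qemu-img check /dev/<vg_name>/<lv_name>
# repair leaked clusters only
qemu-img check -r leaks /dev/<vg_name>/<lv_name>
# try to repair everything (the risky one on badly damaged images)
qemu-img check -r all /dev/<vg_name>/<lv_name>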
>
> It would be even better if we could have a look at the respective image.
> I seem to remember that John (CCed) had a few scripts to analyse
> corrupted qcow2 images, maybe we would be able to see something there.
I just exported it like this:
qemu-img convert /dev/the_correct_path /home/blablah.qcow2.img
The resulting file is 32 GB, and I need an idea for how to transfer this image to you.
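If it helps, I could also produce a compressed copy for the transfer, with
something along these lines (untested on this particular image; the output
file name is just an example):

# same export, but with compressed qcow2 output, to shrink the 32 GB file
qemu-img convert -c -O qcow2 /dev/the_correct_path /home/blablah-compressed.qcow2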
>
>> What I have read that is similar to my case is:
>> - usage of qcow2
>> - heavy disk I/O
>> - using the virtio-blk driver
>>
>> In the proxmox thread, they tend to say that using virtio-scsi is the
>> solution. I asked this question to the oVirt experts
>> (https://lists.ovirt.org/pipermail/users/2018-February/086753.html), but it's
>> not clear that the driver is to blame.
>
> This seems very unlikely. The corruption you're seeing is in the qcow2
> metadata, not only in the guest data.
Are you saying:
- the corruption is in the metadata and in the guest data
OR
- the corruption is only in the metadata
?
> If anything, virtio-scsi exercises
> more qcow2 code paths than virtio-blk, so any potential bug that affects
> virtio-blk should also affect virtio-scsi, but not the other way around.
I get that.
>
>> I agree with the answer Yaniv Kaul gave me, saying I have to properly
>> report the issue, so I'm eager to know what specific information I can
>> give you now.
>
> To be honest, debugging corruption after the fact is pretty hard. We'd
> need the 'qemu-img check' output
Done.
> and ideally the image to do anything,
I remember some Red Hat people once gave me temporary access to upload a
large file to some dedicated server. Is that still possible?
> but I can't promise that anything would come out of this.
>
> Best would be a reproducer, or at least some operation that you can link
> to the appearance of the corruption. Then we could take a more targeted
> look at the respective code.
Sure.
Alas, I find no obvious pattern leading to the corruption:
From the guest side, it appeared with Windows 2003, 2008, 2012, and Linux
CentOS 6 and 7. It appeared with virtio-blk; I have changed some VMs to
use virtio-scsi, but it's too soon to tell whether corruption appears in
that case.
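For what it's worth, which driver a given VM actually ended up with can be
checked on the host from the libvirt domain XML, with something like this
(the VM name is a placeholder):

# bus='virtio' means virtio-blk; bus='scsi' together with a virtio-scsi
# controller means virtio-scsi
virsh dumpxml <vm_name> | grep -A 3 "<disk"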
As I said, I use snapshots VERY rarely, and our versions are too old,
so we only do them the cold way (VM shut down). So, very safely.
The "weirdest" thing we do is migrate VMs: you can see how conservative
we are!
>> As you can imagine, all this setup is in production, and for most of the
>> VMs, I cannot "play" with them. Moreover, we launched a campaign of nightly
>> stopping every VM, running qemu-img check on each one, then booting it again.
>> So it might take some time before I find another corrupted image.
>> (which I'll preciously store for debug)
>>
>> Other information: we very rarely do snapshots, but I'm close to imagining
>> that automated migrations of VMs could trigger similar behaviors on qcow2
>> images.
>
> To my knowledge, oVirt only uses external snapshots and creates them
> with QMP. This should be perfectly safe because from the perspective of
> the qcow2 image being snapshotted, it just means that it gets no new
> write requests.
>
> Migration is something more involved, and if you could relate the
> problem to migration, that would certainly be something to look into. In
> that case, it would be important to know more about the setup, e.g. is
> it migration with shared or non-shared storage?
I'm 99% sure the corrupted VMs have never seen a snapshot, and 99% sure
they have been migrated at most once.
For me, *this* is the track to follow.
We have 2 main 3.6 oVirt DCs, each with 4 dedicated LUNs, connected via
iSCSI. Two SANs are serving those volumes. These are Equallogic, and the
setup of each volume includes an option saying:
Access type: "Shared"
http://psonlinehelp.equallogic.com/V5.0/Content/V5TOC/Allowing_or_disallowing_multi_ho.htm
(shared access to the iSCSI target from multiple initiators)
To be honest, I've never been comfortable with this point:
- In a completely different context, I'm using it to allow two file
servers to publish an OCFS2 volume embedded in a clustered LVM. That is
absolutely reliable, as *c*LVM and OCFS2 are explicitly written to manage
concurrent access.
- In the case of oVirt, we are allowing tens of hosts to connect to
the same LUN. This LUN is then managed by a classical LVM setup, but I
see no notion of concurrent access management here. To date, I still
haven't understood how these concurrent accesses to the same LUN are
managed without crashes (see the read-only LVM commands sketched just below).
I hope I won't find any skeletons in the closet.
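For what it's worth, the view each host has of such a LUN can be listed
read-only with something like this (the VG name below is a placeholder):

# volume groups as seen from this host, with attributes and LV count
vgs -o vg_name,vg_attr,lv_count,vg_size,vg_free
# logical volumes (the oVirt images) inside the VG on the shared LUN
lvs -o lv_name,lv_attr,lv_size,lv_tags <vg_name>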
>> Last point about the versions we use: yes, that's old; yes, we're planning to
>> upgrade, but we don't know when.
>
> That would be helpful, too. Nothing is more frustrating than debugging a
> bug in an old version only to find that it's already fixed in the
> current version (well, except maybe debugging and finding nothing).
>
> Kevin
Exactly, but as I wrote to Yaniv, it would be sad to set up a brand new 4.2
DC and then face the same old issues.
For the record, I just finished setting up another 4.2 DC, but it will be a
while before I can apply to it a workload similar to that of the 3.6
production site.
--
Nicolas ECARNOT
-------------- next part --------------
A non-text attachment was scrubbed...
Name: qemu-img_check.txt.gz
Type: application/gzip
Size: 26453 bytes
Desc: not available
URL: <http://lists.ovirt.org/pipermail/users/attachments/20180213/1735088a/attachment.bin>