On Thu, Feb 2, 2017 at 12:04 PM, Gianluca Cecchi
<gianluca.cecchi(a)gmail.com> wrote:
> On Thu, Feb 2, 2017 at 10:48 AM, Nir Soffer <nsoffer(a)redhat.com> wrote:
> >
> > On Thu, Feb 2, 2017 at 1:11 AM, Gianluca Cecchi
> > <gianluca.cecchi(a)gmail.com> wrote:
> > > On Wed, Feb 1, 2017 at 8:22 PM, Gianluca Cecchi
> > > <gianluca.cecchi(a)gmail.com> wrote:
> > >>
> > >> OK. In the meantime I have applied your suggested config and restarted
> > >> the 2 nodes.
> > >> Let me test and see whether I find any problems, also running some
> > >> I/O tests.
> > >> Thanks in the meantime,
> > >> Gianluca
> > >
> > >
> > > Quick test, without much success.
> > >
> > > Inside the guest I run this loop:
> > >
> > > while true
> > > do
> > >     time dd if=/dev/zero bs=1024k count=1024 of=/home/g.cecchi/testfile
> > >     sleep 5
> > > done
> >
> > I don't think this test is related to the issues you reported earlier.
> >
> I thought the same, and I agree with all the related comments you wrote.
> I'm going to test the suggested chunk modifications.
> In general, do you recommend thin provisioning at all on SAN storage?
Only if your storage does not support thin provisioning, or if you need
snapshot support.
If you don't need these features, using raw will be much more reliable
and faster.
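
For reference, the chunk settings mentioned above are vdsm configuration;
a minimal sketch of /etc/vdsm/vdsm.conf, assuming the option names recent
vdsm uses for thin extension (the values are only examples, not a
recommendation):

    [irs]
    # extend a thin LV when its allocation crosses this watermark...
    volume_utilization_percent = 50
    # ...growing it by this many megabytes each time
    volume_utilization_chunk_mb = 2048

Restart vdsmd after changing these for the new values to take effect.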
Even if you use raw, you can still perform live storage migration; we
create a snapshot using qcow2 format, copy the base raw volume to another
storage, and finally delete the snapshot on the destination storage.
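
To illustrate that flow outside oVirt, here is a rough sketch with plain
files and hypothetical paths (not vdsm's actual commands):

    # 1) "snapshot": create a qcow2 overlay backed by the raw volume;
    #    new guest writes go to the overlay, the raw base stops changing
    qemu-img create -f qcow2 -o backing_file=base.raw,backing_fmt=raw overlay.qcow2
    # 2) copy the now-stable raw base to the destination storage
    cp base.raw /dest/base.raw
    # 3) point the overlay at the copy and merge it back into the raw
    #    base, removing the temporary snapshot
    qemu-img rebase -u -b /dest/base.raw overlay.qcow2
    qemu-img commit overlay.qcow2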
In the future (oVirt 5?) we would like to rely only on smart storage for
thin provisioning and snapshot support.
> I decided to switch to preallocated to confirm this with further tests,
> so I created a snapshot and then a clone of the VM, changing the
> allocation policy of the disk to preallocated.
> So far so good:
> Feb 2, 2017 10:40:23 AM VM ol65preallocated creation has been completed.
> Feb 2, 2017 10:24:15 AM VM ol65preallocated creation was initiated by
> admin@internal-authz.
> Feb 2, 2017 10:22:31 AM Snapshot 'for cloning' creation for VM 'ol65'
> has been completed.
> Feb 2, 2017 10:22:31 AM Snapshot 'for cloning' creation for VM 'ol65'
> was initiated by admin@internal-authz.
> So the throughput seems OK for this storage type (the LUNs are on RAID 5
> made of SATA disks): writing 90 GiB in about 16 minutes (92160 MiB /
> 960 s) comes to roughly 96 MiB/s, which is what I expected.
> What I see in messages during the cloning phase, from 10:24 to 10:40:
> Feb 2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb 2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb 2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb 2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb 2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb 2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb 2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb 2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb 2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb 2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb 2 10:24:14 ovmsrv05 journal: vdsm root WARNING File:
> /rhev/data-center/588237b8-0031-02f6-035d-000000000136/922b5269-ab56-4c4d-838f-49d33427e2ab/images/9d1c977f-540d-436a-9d93-b1cb0816af2a/607dbf59-7d4d-4fc3-ae5f-e8824bf82648
> already removed
> Feb 2 10:24:14 ovmsrv05 multipathd: dm-15: remove map (uevent)
> Feb 2 10:24:14 ovmsrv05 multipathd: dm-15: devmap not registered, can't remove
> Feb 2 10:24:14 ovmsrv05 multipathd: dm-15: remove map (uevent)
> Feb 2 10:24:17 ovmsrv05 kernel: blk_update_request: critical target error, dev dm-4, sector 44566529
> Feb 2 10:24:17 ovmsrv05 kernel: dm-15: WRITE SAME failed. Manually zeroing.
> Feb 2 10:40:07 ovmsrv05 kernel: scsi_verify_blk_ioctl: 16 callbacks suppressed
> Feb 2 10:40:07 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb 2 10:40:17 ovmsrv05 multipathd: dm-15: remove map (uevent)
> Feb 2 10:40:17 ovmsrv05 multipathd: dm-15: devmap not registered, can't remove
> Feb 2 10:40:17 ovmsrv05 multipathd: dm-15: remove map (uevent)
> Feb 2 10:40:22 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
Let's file a bug to investigate these "dd: sending ioctl 80306d02 to a
partition!" messages.
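
As a data point for the bug, the ioctl number decodes cleanly with the
standard asm-generic _IOC bit layout; a quick bash sketch:

    code=0x80306d02
    printf 'dir=%d size=%d type=0x%x nr=%d\n' \
        $(( (code >> 30) & 0x3 )) $(( (code >> 16) & 0x3fff )) \
        $(( (code >> 8) & 0xff )) $(( code & 0xff ))
    # prints: dir=2 size=48 type=0x6d nr=2
    # dir=2 is _IOC_READ and type 0x6d is 'm', i.e. _IOR('m', 2, 48),
    # which matches MTIOCGET (the tape status ioctl) on 64-bit builds

If that decoding is right, something in the flow is probing the device
as if it were a tape drive.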
Please attach the vdsm log from the machine emitting these messages.
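Something like this on the affected host should capture the relevant
window (the paths are the vdsm defaults):

    tar czf vdsm-logs-$(hostname)-$(date +%Y%m%d).tar.gz \
        /var/log/vdsm/vdsm.log* /var/log/messages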
> > > After about 7 rounds I get this in the messages of the host where
> > > the VM is running:
> > >
> > > Feb 1 23:31:39 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
> > > Feb 1 23:31:39 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
> > > Feb 1 23:31:44 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
> > > Feb 1 23:31:45 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
> > > Feb 1 23:31:45 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
> > > Feb 1 23:31:47 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
> > > Feb 1 23:31:50 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
> > > Feb 1 23:31:50 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
> > > Feb 1 23:31:56 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
> > > Feb 1 23:31:57 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
> >
> > This is interesting; we have seen these messages before, but could never
> > detect the flow causing them. Are you sure you see this each time you
> > extend your disk?
> >
> > If you can reproduce this, please file a bug.
> >
> OK, see also the messages recorded above during the clone phase.
> Gianluca