<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Feb 2, 2017 at 12:04 PM, Gianluca Cecchi <span dir="ltr"><<a href="mailto:gianluca.cecchi@gmail.com" target="_blank">gianluca.cecchi@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote"><span class="">On Thu, Feb 2, 2017 at 10:48 AM, Nir Soffer <span dir="ltr"><<a href="mailto:nsoffer@redhat.com" target="_blank">nsoffer@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span class="m_-1117060285222874919gmail-">On Thu, Feb 2, 2017 at 1:11 AM, Gianluca Cecchi<br>
<<a href="mailto:gianluca.cecchi@gmail.com" target="_blank">gianluca.cecchi@gmail.com</a>> wrote:<br>
>>> On Wed, Feb 1, 2017 at 8:22 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
>>>>
>>>> OK. In the meantime I have applied your suggested config and restarted
>>>> the two nodes.
>>>> Let me test and see if I find any problems, also running some I/O tests.
>>>> Thanks in the meantime,
>>>> Gianluca
>>>
>>> Quick test, without much success.
>>>
>>> Inside the guest I ran this loop:
>>> while true
>>> do
>>>   time dd if=/dev/zero bs=1024k count=1024 of=/home/g.cecchi/testfile

A single 'dd' rarely saturates high-performance storage; there are better
utilities for testing ('fio', 'vdbench' and 'ddpt', for example). It is also
testing a very theoretical scenario - you very rarely write zeros, and you
very rarely write that much sequential I/O with a fixed block size - so
these are almost 'hero numbers'.
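
For something more realistic I would reach for fio. A minimal sketch,
reusing your test file path (the size, block size and queue depth below are
only illustrative, not tuned recommendations):

  fio --name=randwrite --filename=/home/g.cecchi/testfile \
      --rw=randwrite --bs=4k --size=1g --ioengine=libaio \
      --direct=1 --iodepth=32 --runtime=60 --time_based --group_reporting

--direct=1 bypasses the guest page cache, so you measure the storage path
rather than RAM, and unlike dd from /dev/zero, fio fills buffers with
non-zero data by default, which matters on arrays that dedup or compress
zeros.
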
>>>   sleep 5
>>> done
>>
>> I don't think this test is related to the issues you reported earlier.
>
> I thought the same, and likewise about all the related comments you wrote.
> I'm going to test the suggested modifications for the chunk settings.
> In general, do you recommend thin provisioning at all on SAN storage?

It depends on your SAN. On a thin-provisioned one (potentially with inline
dedup and compression, such as XtremIO, Pure, Nimble and others) I don't see
great value in thin provisioning.

> I decided to switch to preallocated for further tests, to confirm this.
> So I created a snapshot and then a clone of the VM, changing the allocation
> policy of the disk to preallocated.
> So far so good.
>
> Feb 2, 2017 10:40:23 AM VM ol65preallocated creation has been completed.
> Feb 2, 2017 10:24:15 AM VM ol65preallocated creation was initiated by admin@internal-authz.
> Feb 2, 2017 10:22:31 AM Snapshot 'for cloning' creation for VM 'ol65' has been completed.
> Feb 2, 2017 10:22:31 AM Snapshot 'for cloning' creation for VM 'ol65' was initiated by admin@internal-authz.
>
> So the throughput seems OK for this storage type (the LUNs are on a RAID5
> array of SATA disks): 16 minutes to write 90 GB is about 96 MB/s
> (90 x 1024 MB / 960 s), which is what I expected.

What were you expecting? Is it FC or iSCSI? How many paths? What is the
I/O scheduler in the VM?
Is it using virtio-blk or virtio-SCSI?
Y.

> What I see in the messages log during the cloning phase, from 10:24 to 10:40:
>
> Feb 2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb 2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb 2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb 2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb 2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb 2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb 2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb 2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb 2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb 2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb 2 10:24:14 ovmsrv05 journal: vdsm root WARNING File: /rhev/data-center/588237b8-0031-02f6-035d-000000000136/922b5269-ab56-4c4d-838f-49d33427e2ab/images/9d1c977f-540d-436a-9d93-b1cb0816af2a/607dbf59-7d4d-4fc3-ae5f-e8824bf82648 already removed
> Feb 2 10:24:14 ovmsrv05 multipathd: dm-15: remove map (uevent)
> Feb 2 10:24:14 ovmsrv05 multipathd: dm-15: devmap not registered, can't remove
> Feb 2 10:24:14 ovmsrv05 multipathd: dm-15: remove map (uevent)
> Feb 2 10:24:17 ovmsrv05 kernel: blk_update_request: critical target error, dev dm-4, sector 44566529
> Feb 2 10:24:17 ovmsrv05 kernel: dm-15: WRITE SAME failed. Manually zeroing.
> Feb 2 10:40:07 ovmsrv05 kernel: scsi_verify_blk_ioctl: 16 callbacks suppressed
> Feb 2 10:40:07 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb 2 10:40:17 ovmsrv05 multipathd: dm-15: remove map (uevent)
> Feb 2 10:40:17 ovmsrv05 multipathd: dm-15: devmap not registered, can't remove
> Feb 2 10:40:17 ovmsrv05 multipathd: dm-15: remove map (uevent)
> Feb 2 10:40:22 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
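
By the way, a quick way to answer my questions above - only a sketch, since
the device names (vda vs. sda, and which dm-X is which) depend on your setup:

  # inside the guest: active I/O scheduler for the disk
  # (vda for virtio-blk, sda for virtio-SCSI)
  cat /sys/block/vda/queue/scheduler

  # on the host: paths per LUN and their state
  multipath -ll
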
>>> After about 7 rounds I get this in the messages log of the host where the
>>> VM is running:
>>>
>>> Feb 1 23:31:39 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
>>> Feb 1 23:31:39 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
>>> Feb 1 23:31:44 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
>>> Feb 1 23:31:45 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
>>> Feb 1 23:31:45 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
>>> Feb 1 23:31:47 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
>>> Feb 1 23:31:50 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
>>> Feb 1 23:31:50 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
>>> Feb 1 23:31:56 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
>>> Feb 1 23:31:57 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
>>
>> This is interesting; we have seen these messages before, but could never
>> determine the flow causing them. Are you sure you see this each time you
>> extend your disk?
>>
>> If you can reproduce this, please file a bug.
<div><div class="m_-1117060285222874919gmail-h5"><br></div></div></blockquote><div><br></div></span><div>Ok, see also above the registered message during the clone phase.</div><span class="HOEnZb"><font color="#888888"><div>Gianluca </div></font></span></div></div></div>