oVirt Performance (Horrific)

Hi everybody. My coworker and I have some decent hardware that would make great standalone servers, and we threw in a 10Gb switch with 2x10Gbps cards in 4 boxes. We have 2 excellent boxes: Supermicro boards (SpeedStep disabled) with 14-core Intel i9-7940X CPUs, 128GB RAM (3200MHz), a 1TB Samsung 870 EVO M.2, a 1TB Samsung SSD, one 8TB WD Gold, and one 6TB WD Gold. Then we have 2 more boxes (one with an 8-core i7-9700K, one with a 6-core i7-8700), 128GB RAM in one and 64GB in the other, all 3000MHz, with the same 1TB SSD, 6TB WD Gold, and 8TB WD Gold drives as the other boxes, plus 10Gbps cards.

Our problem is performance. We used the slower boxes for KVM (libvirt) and FreeNAS at first, which was great performance-wise. Then we bought the new Supermicro boxes, converted to oVirt + Gluster, ran some basic write tests using dd writing zeros to files from 1GB up to 50GB, and were happy with the numbers writing directly to the Gluster volume. But then we stuck a Windows VM on it and turned it on... I'll stop there, because turning it on stopped any performance testing. This thing blew goat cheese. It was so slow that the oVirt guest agent doesn't even start, the MS SQL Server engine sometimes fails to come up, and there are other errors.

So naturally, we removed Gluster from the equation. We took one of the 8TB WD Gold drives, made it a Linux NFS share, and gave it to oVirt as an NFS storage domain to put VMs on. Just a single drive. We migrated the disk with the fresh Windows 10 installation to it, configured as VirtIO-SCSI, and booted the VM with 16GB RAM and 8:1:1 CPUs. To our surprise it was still awful. I just ran winsat disk -drive c: for example purposes: the SPICE viewer kept freezing, and I had Resource Monitor open watching 10,000ms disk response times. (I ended up rebooting because the first results disappeared; I hadn't run it as administrator.) Opening a command prompt is painful while the disk is in use, and Task Manager renders with no text on it. The disk is writing at something like 1MB/s.
The command prompt finally showed up, blank, with the cursor offset and no text anywhere. So the reboot: powering off took 2 minutes, booting took roughly 6 minutes 30 seconds, and logging in took 1 minute or more. That's 9-10 minutes to reboot and log back into a fresh Windows install, then 2 minutes to open a command prompt, Task Manager, and Resource Monitor. During the write test, disk I/O on the VM was under 8MB/s; from the graph it looks like 6MB/s. Network traffic averaged around 20Mbps, CPU was near zero, with a couple of spikes up to 30MB/s on the disk. I ran the same test on my own disk and it finished in under a minute; on the VM it was still running after 30 minutes. I'll wait for the results and post them here.

OK, it's been 30 minutes and it's still writing. I don't see the writes in Resource Monitor; Windows is doing a bunch of random app updates or something (Candy Crush, on a fresh install). I hit Enter a few times at the prompt and it moved on to the flush-seq phase, and now it shows up in Resource Monitor doing something again. I just ran the same thing on my PC and it finished in less than a minute. Whatever the VM is doing, it's running at almost 1MB/s. I think something went wrong, because each individual test shows only about 2 minutes elapsed, yet the total run time is 37 minutes. And at no point did the Windows resource graphs or any of the oVirt node system graphs show more than about 6MB/s, and definitely not 50MB/s or 1GB/s, so some of these reported numbers are flat-out lies:

C:\Windows\system32>winsat disk -drive c
Windows System Assessment Tool
> Running: Feature Enumeration ''
> Run Time 00:00:00.00
> Running: Storage Assessment '-drive c -ran -read'
> Run Time 00:00:12.95
> Running: Storage Assessment '-drive c -seq -read'
> Run Time 00:00:20.59
> Running: Storage Assessment '-drive c -seq -write'
> Run Time 00:02:04.56
> Running: Storage Assessment '-drive c -flush -seq'
> Run Time 00:01:02.75
> Running: Storage Assessment '-drive c -flush -ran'
> Run Time 00:01:50.20
> Dshow Video Encode Time                      0.00000 s
> Dshow Video Decode Time                      0.00000 s
> Media Foundation Decode Time                 0.00000 s
> Disk  Random 16.0 Read                       5.25 MB/s      5.1
> Disk  Sequential 64.0 Read                   1220.56 MB/s   8.6
> Disk  Sequential 64.0 Write                  53.61 MB/s     5.5
> Average Read Time with Sequential Writes     22.994 ms      1.9
> Latency: 95th Percentile                     85.867 ms      1.9
> Latency: Maximum                             325.666 ms     6.5
> Average Read Time with Random Writes         29.548 ms      1.9
> Total Run Time 00:37:35.55
I ran it again and got the exact same results. So I tried copying a file from a 1Gbps network location (an SSD-backed server) to this PC: a 4GB CentOS 7 ISO to the desktop. It started at 20MB/s, climbed to 90MB/s... then dropped after a couple of gigs. With 2.25GB to go it was running at 2.7MB/s, with fluctuations up to 5MB/s. To drive this home, at the same time I copied the same file off that server to another server with SSD disks and it ran at 100MB/s, which is what I'd expect over a 1Gbps network.

All that said, we do have an SSD Gluster replica 2 + arbiter volume (which seemed fastest when we tested different layouts) on the 1TB SSDs. I was able to read from it inside a VM at 550MB/s, which is expected for an SSD. dd writing zeros from the oVirt node also got about 550MB/s. But inside a VM the best we get is around 10MB/s writing. I've done basically the same testing with Windows Server 2016: booting is terrible, opening applications is terrible. With SQL Server running off the SSD Gluster volume I can read at 550MB/s, but writing is horrific, somewhere around 2-10MB/s.

Latency between the nodes with ping is around 100us. The hardware should be able to do 200MB/s on the HDDs and 550MB/s on the SSDs, but it doesn't, and that's evident in every write scenario inside a VM. Migrating VMs runs at the same speed. Gluster healing seems to run faster; we've seen it consume 7-9Gbps. So I feel this is an oVirt issue and not Gluster, especially since all the tests above give the same results when using an NFS mount on the box running the VM in oVirt.

Please guide me. I can post pictures, logs, whatever is needed. Just ask.
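One caveat about the dd baseline above: dd writing zeros with no final sync mostly measures the page cache, not the storage underneath, so those initial direct-to-Gluster numbers may have been optimistic. A minimal sketch of the difference (the file path is arbitrary):

```shell
# Write 64 MiB twice to compare apparent vs. durable throughput.
# Run 1: buffered; dd can report page-cache speed, not disk speed.
dd if=/dev/zero of=/tmp/ddtest.img bs=1M count=64 2>&1 | tail -n1
# Run 2: conv=fdatasync makes dd flush before reporting, so the
# number includes the time to actually reach the backing storage.
dd if=/dev/zero of=/tmp/ddtest.img bs=1M count=64 conv=fdatasync 2>&1 | tail -n1
rm -f /tmp/ddtest.img
```

On slow storage the second number is typically far lower than the first, and a VM's writes behave more like the synced case, since oVirt defaults to direct I/O (cache='none') for VM disks.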

Can you explain your expectations for where you are trying to get? oVirt uses KVM under the hood and performs very well for many organizations. I am sure it's just a configuration thing.

On Thu, Feb 28, 2019 at 12:48 PM <drew.rash@gmail.com> wrote:
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/P3AYHPYIY45WLE...

I expect VMs to be able to write at 200MB/s or 500MB/s to the HDD/SSD when using an NFS mount or Gluster from inside a VM. But heck, it's not even 10% of that (20MB/s or 50MB/s); it's closer to 2.5%-5% of the speed of the drives. I expect Windows services not to time out due to slow disks while booting. I expect a Windows VM to boot in less than 6 minutes. I expect Chrome to open in less than a minute. :) I was hoping for at least half or quarter speed; I'd be OK with 50MB/s or 100MB/s writing. I was expecting it to perform out of the box, but there appears to be something we've done wrong, or a tweak we're missing. As it stands, we can't run even one Windows VM at something anyone would remotely call "performs very well". Please guide me. Do you have any commands you want me to run, tests to do, or a specific configuration you'd like me to try?
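The percentage estimate above can be sanity-checked: taking the roughly 10MB/s in-VM write speed reported elsewhere in this thread against the 200MB/s HDD and 550MB/s SSD raw figures:

```python
# Sanity-check the "2.5%-5% of drive speed" claim using figures from the thread.
raw = {"HDD": 200.0, "SSD": 550.0}   # MB/s, bare-metal expectations
observed_vm_write = 10.0             # MB/s, typical in-VM write speed reported

for name, speed in raw.items():
    pct = 100.0 * observed_vm_write / speed
    print(f"{name}: {observed_vm_write:.0f}/{speed:.0f} MB/s = {pct:.1f}% of raw speed")
# HDD comes out at 5.0% and SSD at 1.8%, so the estimate is the right order.
```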

Check your volumes in the oVirt admin portal and make sure the optimize-for-VM-storage option ("Optimize for Virt Store") is selected. I have a three-node oVirt HCI setup with SSD Gluster-backed storage and a 10Gb storage network, and I write at around 50-60 megabytes per second from within VMs. Before I used the optimize-for-VM-storage setting it was about 10x less. IIRC the optimize setting is supposed to be set by the installer by default, but it wasn't in my case when I used Cockpit to deploy (this was a little while ago, on oVirt 4.2). Worth checking anyway.

On Thu, Feb 28, 2019 at 4:51 PM Drew R <drew.rash@gmail.com> wrote:
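For reference, that optimize setting corresponds (if memory serves) to applying Gluster's "virt" option group to the volume, which can also be done and inspected from the CLI. A sketch, assuming the volume name gv2 that appears later in this thread; the exact options the group applies depend on the glusterfs build:

```shell
# Roughly what oVirt's "Optimize for Virt Store" does for a volume:
gluster volume set gv2 group virt
# Verify which options ended up set on the volume:
gluster volume info gv2
```

Comparing the "Options Reconfigured" section before and after shows exactly what changed.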

Also, one more thing: did you make sure to set up the 10Gb Gluster network in oVirt, and set migration and VM traffic to use the Gluster network?

On Thu, Feb 28, 2019 at 6:11 PM Jayme <jaymef@gmail.com> wrote:

Yes, it's all on the 10Gb network. The only way out of it is a single 1Gb link going to the rest of the servers/clients and the internet.

On Thu, Feb 28, 2019 at 4:15 PM Jayme <jaymef@gmail.com> wrote:

My coworker says he did the optimize thing a while back. I'll see if he wants to click it again tomorrow sometime. If it has a positive effect I'll be sure to post it. On Thu, Feb 28, 2019 at 4:12 PM Jayme <jaymef@gmail.com> wrote:

I set my account name to Drew R earlier and it seems to have altered the course of this thread. Anyway, Strahil replied to me (not shown below) and said to try this:

Strahil: "Can you try setting the viodiskcache custom property to write back? Best Regards, Strahil"

I cloned the Windows 10 VM, set one copy to use the 8TB WD Gold disk and one to use the 6TB WD Gold 2+1-arbiter Gluster volume. The 8TB single-disk VM ran at full speed: 115MB/s, where the same copy took 19 minutes before. The Gluster VM is still going as I type this, probably still on track for 19 minutes on a 6.8GB file (~5MB/s). 115MB/s basically maxes out the 1Gbps Ethernet to the remote server I'm reading the file from, and this is taking place inside a VM managed by oVirt. Outside the VM I can achieve the expected speeds for the hardware in any direction. I'd test something within the 10Gbps network, but it's all oVirt-managed devices and they can't even read off each other faster than 5MB/s.

So that was a great example of how one setting can have an effect. With VirtIO-SCSI plus viodiskcache=writeback on a single non-Gluster disk I get basically full speed. Otherwise: goat cheese. So how do I fix the Gluster performance through oVirt? (That Gluster copy just finished, by the way: 14 minutes, so maybe a slight improvement.)
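For context on why this setting matters: the viodiskcache custom property ends up as the QEMU disk cache mode in the libvirt domain XML that oVirt generates; without it, oVirt defaults to cache='none' (direct I/O), so every guest write goes straight through to the backing storage. An illustrative disk element with writeback set; the path and the exact attribute set are assumptions for illustration, not copied from this setup:

```xml
<!-- Illustrative libvirt disk definition with writeback caching enabled; -->
<!-- with viodiskcache unset, oVirt would emit cache='none' here instead. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback' error_policy='stop' io='threads'/>
  <source file='/rhev/data-center/.../images/.../volume-uuid'/>
  <target dev='sda' bus='scsi'/>
</disk>
```

The trade-off: writeback lets the host page cache absorb guest writes (hence the speedup), at the cost of weaker durability if the host loses power before data is flushed.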

Also, I just noticed that the Windows Resource Monitor response times now show zero (0) across the board with the viodiskcache=writeback setting. Not sure if that's accurate, but it's a notable difference to go with the notable speed boost.

Saw some people asking for profile info. I had started a migration from a 6TB WD Gold 2+1-arbiter replicated Gluster volume to a 1TB Samsung SSD 2+1 replicated volume, and it's been running a while on a 100GB thin-provisioned file with about 28GB actually used. Here is the profile info. I started the profiler about 5 minutes ago; the migration had been running for about 30 minutes:

gluster volume profile gv2 info

Brick: 10.30.30.122:/gluster_bricks/gv2/brick
---------------------------------------------
Cumulative Stats:
   Block Size:       256b+       512b+      1024b+
 No. of Reads:        1189           8          12
No. of Writes:           4        3245         883

   Block Size:      2048b+      4096b+      8192b+
 No. of Reads:          10          20           2
No. of Writes:        1087      312228      124080

   Block Size:     16384b+     32768b+     65536b+
 No. of Reads:           0           1          52
No. of Writes:        5188        3617        5532

   Block Size:    131072b+
 No. of Reads:       70191
No. of Writes:      634192

 %-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls  Fop
 ---------  -----------  -----------  -----------  ------------  ----
      0.00      0.00 us      0.00 us      0.00 us             2  FORGET
      0.00      0.00 us      0.00 us      0.00 us           202  RELEASE
      0.00      0.00 us      0.00 us      0.00 us          1297  RELEASEDIR
      0.00     14.50 us      9.00 us     20.00 us             4  READDIR
      0.00     38.00 us      9.00 us    120.00 us             7  GETXATTR
      0.00     66.00 us     34.00 us    128.00 us             6  OPEN
      0.00    137.25 us     52.00 us    195.00 us             4  SETATTR
      0.00     23.19 us     11.00 us     46.00 us            26  INODELK
      0.00     41.58 us     18.00 us     79.00 us            24  OPENDIR
      0.00    166.70 us     15.00 us    775.00 us            27  READDIRP
      0.01    135.29 us     12.00 us   11695.00 us          221  STATFS
      0.01    176.54 us     22.00 us   22944.00 us          364  FSTAT
      0.02    626.21 us     13.00 us   17308.00 us          168  STAT
      0.09    834.84 us      9.00 us   34337.00 us          607  LOOKUP
      0.73    146.18 us      6.00 us   52255.00 us        29329  FINODELK
      1.03    298.20 us     42.00 us   43711.00 us        20204  FXATTROP
     15.38   8903.40 us    213.00 us  213832.00 us        10102  WRITE
     39.14  26796.37 us    222.00 us  122696.00 us         8538  READ
     43.59  39536.79 us    259.00 us  183630.00 us         6446  FSYNC

    Duration: 15078 seconds
   Data Read: 9207377205 bytes
Data Written: 86214017762 bytes

Interval 2 Stats:
   Block Size:       256b+       512b+      1024b+
 No. of Reads:          17           0           0
No. of Writes:           0          43           7

   Block Size:      2048b+      4096b+      8192b+
 No. of Reads:           0           7           0
No. of Writes:          16        1881        1010

   Block Size:     16384b+     32768b+     65536b+
 No. of Reads:           0           0           6
No. of Writes:         305         586        2359

   Block Size:    131072b+
 No. of Reads:        7162
No. of Writes:         610

 %-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls  Fop
 ---------  -----------  -----------  -----------  ------------  ----
      0.00      0.00 us      0.00 us      0.00 us             6  RELEASE
      0.00      0.00 us      0.00 us      0.00 us            20  RELEASEDIR
      0.00     14.50 us      9.00 us     20.00 us             4  READDIR
      0.00     38.00 us      9.00 us    120.00 us             7  GETXATTR
      0.00     66.00 us     34.00 us    128.00 us             6  OPEN
      0.00    137.25 us     52.00 us    195.00 us             4  SETATTR
      0.00     23.19 us     11.00 us     46.00 us            26  INODELK
      0.00     40.05 us     18.00 us     79.00 us            20  OPENDIR
      0.00    180.33 us     16.00 us    775.00 us            21  READDIRP
      0.01    181.77 us     12.00 us   11695.00 us          149  STATFS
      0.01    511.23 us     15.00 us   16954.00 us          111  STAT
      0.01    210.22 us     22.00 us   22944.00 us          291  FSTAT
      0.09    979.73 us      9.00 us   23539.00 us          404  LOOKUP
      0.81    179.58 us      6.00 us   52255.00 us        19786  FINODELK
      1.07    342.71 us     42.00 us   43711.00 us        13633  FXATTROP
     14.18   9087.30 us    213.00 us  213832.00 us         6817  WRITE
     40.00  40227.38 us    259.00 us  157934.00 us         4343  FSYNC
     43.81  26601.57 us    222.00 us  118756.00 us         7192  READ

    Duration: 177 seconds
   Data Read: 939534945 bytes
Data Written: 363763200 bytes

Brick: 10.30.30.121:/gluster_bricks/gv2/brick
---------------------------------------------
Cumulative Stats:
   Block Size:         8b+       256b+       512b+
 No. of Reads:           0      142626      329092
No. of Writes:          12          98      123230

   Block Size:      1024b+      2048b+      4096b+
 No. of Reads:       21175       18223     3345043
No. of Writes:       58646       69147     6092143

   Block Size:      8192b+     16384b+     32768b+
 No. of Reads:     1090937      216037      748595
No. of Writes:     2030403      243915      240942

   Block Size:     65536b+    131072b+
 No. of Reads:     1524667     4828662
No. of Writes:      605125     1889892

 %-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls  Fop
 ---------  -----------  -----------  -----------  ------------  ----
      0.00      0.00 us      0.00 us      0.00 us           133  FORGET
      0.00      0.00 us      0.00 us      0.00 us          4821  RELEASE
      0.00      0.00 us      0.00 us      0.00 us         42742  RELEASEDIR
      0.00     28.50 us     12.00 us     61.00 us             4  READDIR
      0.00     71.75 us     59.00 us     88.00 us             4  SETATTR
      0.00     44.57 us     12.00 us     94.00 us             7  GETXATTR
      0.00     62.17 us     42.00 us     89.00 us             6  OPEN
      0.00     35.92 us      1.00 us     76.00 us            24  OPENDIR
      0.00    325.54 us     22.00 us   1117.00 us            13  READDIRP
      0.00    482.13 us     19.00 us   12473.00 us           46  STAT
      0.00    217.65 us     15.00 us   32841.00 us          221  STATFS
      0.01    497.52 us     11.00 us   29179.00 us          539  LOOKUP
      0.02    685.44 us     16.00 us   61131.00 us          615  FSTAT
      0.04  31691.92 us     17.00 us  349648.00 us           26  INODELK
      0.45    451.95 us     51.00 us   65375.00 us        20206  FXATTROP
      6.90  13906.03 us    225.00 us  210298.00 us        10103  WRITE
     18.47  58375.96 us    389.00 us  218211.00 us         6446  FSYNC
     27.45  27279.84 us    184.00 us  241444.00 us        20503  READ
     46.66  32462.82 us      7.00 us 2750519.00 us        29283  FINODELK

    Duration: 475152 seconds
   Data Read: 799630169437 bytes
Data Written: 362708560796 bytes

Interval 2 Stats:
   Block Size:       256b+       512b+      1024b+
 No. of Reads:          53           0           0
No. of Writes:           0          43           7

   Block Size:      2048b+      4096b+      8192b+
 No. of Reads:           0         891         616
No. of Writes:          16        1881        1010

   Block Size:     16384b+     32768b+     65536b+
 No. of Reads:         288         750          52
No. of Writes:         304         586        2359

   Block Size:    131072b+
 No. of Reads:        9445
No. of Writes:         610

 %-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls  Fop
 ---------  -----------  -----------  -----------  ------------  ----
      0.00      0.00 us      0.00 us      0.00 us             6  RELEASE
      0.00      0.00 us      0.00 us      0.00 us            20  RELEASEDIR
      0.00     28.50 us     12.00 us     61.00 us             4  READDIR
      0.00     71.75 us     59.00 us     88.00 us             4  SETATTR
      0.00     44.57 us     12.00 us     94.00 us             7  GETXATTR
      0.00     62.17 us     42.00 us     89.00 us             6  OPEN
      0.00     34.50 us      1.00 us     76.00 us            20  OPENDIR
      0.00     55.16 us     19.00 us    128.00 us            25  STAT
      0.00    363.18 us     25.00 us   1117.00 us            11  READDIRP
      0.00     71.61 us     15.00 us   4323.00 us           149  STATFS
      0.01    251.40 us     11.00 us   15344.00 us          336  LOOKUP
      0.02    594.11 us     16.00 us   61131.00 us          450  FSTAT
      0.07  31691.92 us     17.00 us  349648.00 us           26  INODELK
      0.43    384.83 us     51.00 us   65375.00 us        13632  FXATTROP
      7.54  13535.70 us    225.00 us  210298.00 us         6816  WRITE
     20.03  56497.17 us    394.00 us  218211.00 us         4340  FSYNC
     25.91  26229.86 us    184.00 us  241444.00 us        12095  READ
     45.99  28508.86 us      7.00 us 2591280.00 us        19751  FINODELK

    Duration: 177 seconds
   Data Read: 1298631621 bytes
Data Written: 363744256 bytes

Brick: 10.30.30.123:/gluster_bricks/gv2/brick
---------------------------------------------
Cumulative Stats:
   Block Size:         1b+
 No. of Reads:           0
No. of Writes:    18126558

 %-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls  Fop
 ---------  -----------  -----------  -----------  ------------  ----
      0.00      0.00 us      0.00 us      0.00 us           383  FORGET
      0.00      0.00 us      0.00 us      0.00 us          9602  RELEASE
      0.00      0.00 us      0.00 us      0.00 us         70644  RELEASEDIR
      0.00     20.75 us     13.00 us     29.00 us             4  READDIR
      0.00     87.00 us     72.00 us    129.00 us             4  SETATTR
      0.00     71.00 us     46.00 us     93.00 us             6  OPEN
      0.00     66.00 us      9.00 us    114.00 us             7  GETXATTR
      0.00     50.58 us     16.00 us     90.00 us            26  INODELK
      0.00     56.38 us      1.00 us    111.00 us            24  OPENDIR
      0.02     66.23 us     28.00 us    146.00 us           113  FSTAT
      0.17    143.88 us     13.00 us    444.00 us           539  LOOKUP
      1.17     51.64 us     10.00 us    166.00 us         10103  WRITE
      2.89     44.06 us      7.00 us   2177.00 us         29349  FINODELK
      5.55    122.83 us     54.00 us   9281.00 us         20206  FXATTROP
     90.19   6249.64 us    237.00 us  59996.00 us          6448  FSYNC

    Duration: 772792 seconds
   Data Read: 0 bytes
Data Written: 18126558 bytes

Interval 2 Stats:
   Block Size:         1b+
 No. of Reads:           0
No. of Writes:        6816

 %-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls  Fop
 ---------  -----------  -----------  -----------  ------------  ----
      0.00      0.00 us      0.00 us      0.00 us             6  RELEASE
      0.00      0.00 us      0.00 us      0.00 us            20  RELEASEDIR
      0.00     20.75 us     13.00 us     29.00 us             4  READDIR
      0.00     87.00 us     72.00 us    129.00 us             4  SETATTR
      0.00     71.00 us     46.00 us     93.00 us             6  OPEN
      0.00     66.00 us      9.00 us    114.00 us             7  GETXATTR
      0.00     58.10 us      1.00 us    111.00 us            20  OPENDIR
      0.00     50.58 us     16.00 us     90.00 us            26  INODELK
      0.03     66.23 us     28.00 us    146.00 us           113  FSTAT
      0.16    145.84 us     13.00 us    346.00 us           336  LOOKUP
      1.18     51.98 us     11.00 us    157.00 us          6816  WRITE
      2.91     44.05 us      7.00 us   2177.00 us         19798  FINODELK
      5.59    122.70 us     54.00 us   9281.00 us         13632  FXATTROP
     90.11   6210.61 us    238.00 us  54711.00 us          4342  FSYNC

    Duration: 177 seconds
   Data Read: 0 bytes
Data Written: 6816 bytes
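For anyone wanting to reproduce this, the output above comes from Gluster's built-in profiler; the sequence is roughly:

```shell
gluster volume profile gv2 start   # begin collecting per-FOP latency stats
# ... run the workload (here, the live storage migration) ...
gluster volume profile gv2 info    # dump cumulative and interval stats per brick
gluster volume profile gv2 stop    # stop profiling when done
```

Notably, FSYNC, FINODELK, and WRITE dominate the %-latency column on the data bricks above, which looks consistent with sync-heavy writes being the bottleneck.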

Hi,

Could you share the following pieces of information to begin with?
1. output of `gluster volume info $AFFECTED_VOLUME_NAME`
2. the glusterfs version you're running

-Krutika

On Sat, Mar 2, 2019 at 3:38 AM Drew R <drew.rash@gmail.com> wrote:
Saw some people asking for profile info. So I had started a migration from a 6TB WDGold 2+1arb replicated gluster to a 1TB samsung ssd 2+1 rep gluster and it's been running a while for a 100GB file thin provisioned with like 28GB actually used. Here is the profile info. I started the profiler like 5 minutes ago. The migration had been running for like 30minutes:
gluster volume profile gv2 info

Brick: 10.30.30.122:/gluster_bricks/gv2/brick
---------------------------------------------
Cumulative Stats:
   Block Size:       256b+       512b+      1024b+
 No. of Reads:        1189           8          12
No. of Writes:           4        3245         883

   Block Size:      2048b+      4096b+      8192b+
 No. of Reads:          10          20           2
No. of Writes:        1087      312228      124080

   Block Size:     16384b+     32768b+     65536b+
 No. of Reads:           0           1          52
No. of Writes:        5188        3617        5532

   Block Size:    131072b+
 No. of Reads:       70191
No. of Writes:      634192

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us              2      FORGET
      0.00       0.00 us       0.00 us       0.00 us            202     RELEASE
      0.00       0.00 us       0.00 us       0.00 us           1297  RELEASEDIR
      0.00      14.50 us       9.00 us      20.00 us              4     READDIR
      0.00      38.00 us       9.00 us     120.00 us              7    GETXATTR
      0.00      66.00 us      34.00 us     128.00 us              6        OPEN
      0.00     137.25 us      52.00 us     195.00 us              4     SETATTR
      0.00      23.19 us      11.00 us      46.00 us             26     INODELK
      0.00      41.58 us      18.00 us      79.00 us             24     OPENDIR
      0.00     166.70 us      15.00 us     775.00 us             27    READDIRP
      0.01     135.29 us      12.00 us   11695.00 us            221      STATFS
      0.01     176.54 us      22.00 us   22944.00 us            364       FSTAT
      0.02     626.21 us      13.00 us   17308.00 us            168        STAT
      0.09     834.84 us       9.00 us   34337.00 us            607      LOOKUP
      0.73     146.18 us       6.00 us   52255.00 us          29329    FINODELK
      1.03     298.20 us      42.00 us   43711.00 us          20204    FXATTROP
     15.38    8903.40 us     213.00 us  213832.00 us          10102       WRITE
     39.14   26796.37 us     222.00 us  122696.00 us           8538        READ
     43.59   39536.79 us     259.00 us  183630.00 us           6446       FSYNC

    Duration: 15078 seconds
   Data Read: 9207377205 bytes
Data Written: 86214017762 bytes
Interval 2 Stats:
   Block Size:       256b+       512b+      1024b+
 No. of Reads:          17           0           0
No. of Writes:           0          43           7

   Block Size:      2048b+      4096b+      8192b+
 No. of Reads:           0           7           0
No. of Writes:          16        1881        1010

   Block Size:     16384b+     32768b+     65536b+
 No. of Reads:           0           0           6
No. of Writes:         305         586        2359

   Block Size:    131072b+
 No. of Reads:        7162
No. of Writes:         610

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us              6     RELEASE
      0.00       0.00 us       0.00 us       0.00 us             20  RELEASEDIR
      0.00      14.50 us       9.00 us      20.00 us              4     READDIR
      0.00      38.00 us       9.00 us     120.00 us              7    GETXATTR
      0.00      66.00 us      34.00 us     128.00 us              6        OPEN
      0.00     137.25 us      52.00 us     195.00 us              4     SETATTR
      0.00      23.19 us      11.00 us      46.00 us             26     INODELK
      0.00      40.05 us      18.00 us      79.00 us             20     OPENDIR
      0.00     180.33 us      16.00 us     775.00 us             21    READDIRP
      0.01     181.77 us      12.00 us   11695.00 us            149      STATFS
      0.01     511.23 us      15.00 us   16954.00 us            111        STAT
      0.01     210.22 us      22.00 us   22944.00 us            291       FSTAT
      0.09     979.73 us       9.00 us   23539.00 us            404      LOOKUP
      0.81     179.58 us       6.00 us   52255.00 us          19786    FINODELK
      1.07     342.71 us      42.00 us   43711.00 us          13633    FXATTROP
     14.18    9087.30 us     213.00 us  213832.00 us           6817       WRITE
     40.00   40227.38 us     259.00 us  157934.00 us           4343       FSYNC
     43.81   26601.57 us     222.00 us  118756.00 us           7192        READ

    Duration: 177 seconds
   Data Read: 939534945 bytes
Data Written: 363763200 bytes
Brick: 10.30.30.121:/gluster_bricks/gv2/brick
---------------------------------------------
Cumulative Stats:
   Block Size:         8b+       256b+       512b+
 No. of Reads:           0      142626      329092
No. of Writes:          12          98      123230

   Block Size:      1024b+      2048b+      4096b+
 No. of Reads:       21175       18223     3345043
No. of Writes:       58646       69147     6092143

   Block Size:      8192b+     16384b+     32768b+
 No. of Reads:     1090937      216037      748595
No. of Writes:     2030403      243915      240942

   Block Size:     65536b+    131072b+
 No. of Reads:     1524667     4828662
No. of Writes:      605125     1889892

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us            133      FORGET
      0.00       0.00 us       0.00 us       0.00 us           4821     RELEASE
      0.00       0.00 us       0.00 us       0.00 us          42742  RELEASEDIR
      0.00      28.50 us      12.00 us      61.00 us              4     READDIR
      0.00      71.75 us      59.00 us      88.00 us              4     SETATTR
      0.00      44.57 us      12.00 us      94.00 us              7    GETXATTR
      0.00      62.17 us      42.00 us      89.00 us              6        OPEN
      0.00      35.92 us       1.00 us      76.00 us             24     OPENDIR
      0.00     325.54 us      22.00 us    1117.00 us             13    READDIRP
      0.00     482.13 us      19.00 us   12473.00 us             46        STAT
      0.00     217.65 us      15.00 us   32841.00 us            221      STATFS
      0.01     497.52 us      11.00 us   29179.00 us            539      LOOKUP
      0.02     685.44 us      16.00 us   61131.00 us            615       FSTAT
      0.04   31691.92 us      17.00 us  349648.00 us             26     INODELK
      0.45     451.95 us      51.00 us   65375.00 us          20206    FXATTROP
      6.90   13906.03 us     225.00 us  210298.00 us          10103       WRITE
     18.47   58375.96 us     389.00 us  218211.00 us           6446       FSYNC
     27.45   27279.84 us     184.00 us  241444.00 us          20503        READ
     46.66   32462.82 us       7.00 us 2750519.00 us          29283    FINODELK

    Duration: 475152 seconds
   Data Read: 799630169437 bytes
Data Written: 362708560796 bytes
Interval 2 Stats:
   Block Size:       256b+       512b+      1024b+
 No. of Reads:          53           0           0
No. of Writes:           0          43           7

   Block Size:      2048b+      4096b+      8192b+
 No. of Reads:           0         891         616
No. of Writes:          16        1881        1010

   Block Size:     16384b+     32768b+     65536b+
 No. of Reads:         288         750          52
No. of Writes:         304         586        2359

   Block Size:    131072b+
 No. of Reads:        9445
No. of Writes:         610

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us              6     RELEASE
      0.00       0.00 us       0.00 us       0.00 us             20  RELEASEDIR
      0.00      28.50 us      12.00 us      61.00 us              4     READDIR
      0.00      71.75 us      59.00 us      88.00 us              4     SETATTR
      0.00      44.57 us      12.00 us      94.00 us              7    GETXATTR
      0.00      62.17 us      42.00 us      89.00 us              6        OPEN
      0.00      34.50 us       1.00 us      76.00 us             20     OPENDIR
      0.00      55.16 us      19.00 us     128.00 us             25        STAT
      0.00     363.18 us      25.00 us    1117.00 us             11    READDIRP
      0.00      71.61 us      15.00 us    4323.00 us            149      STATFS
      0.01     251.40 us      11.00 us   15344.00 us            336      LOOKUP
      0.02     594.11 us      16.00 us   61131.00 us            450       FSTAT
      0.07   31691.92 us      17.00 us  349648.00 us             26     INODELK
      0.43     384.83 us      51.00 us   65375.00 us          13632    FXATTROP
      7.54   13535.70 us     225.00 us  210298.00 us           6816       WRITE
     20.03   56497.17 us     394.00 us  218211.00 us           4340       FSYNC
     25.91   26229.86 us     184.00 us  241444.00 us          12095        READ
     45.99   28508.86 us       7.00 us 2591280.00 us          19751    FINODELK

    Duration: 177 seconds
   Data Read: 1298631621 bytes
Data Written: 363744256 bytes
Brick: 10.30.30.123:/gluster_bricks/gv2/brick
---------------------------------------------
Cumulative Stats:
   Block Size:         1b+
 No. of Reads:           0
No. of Writes:    18126558

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us            383      FORGET
      0.00       0.00 us       0.00 us       0.00 us           9602     RELEASE
      0.00       0.00 us       0.00 us       0.00 us          70644  RELEASEDIR
      0.00      20.75 us      13.00 us      29.00 us              4     READDIR
      0.00      87.00 us      72.00 us     129.00 us              4     SETATTR
      0.00      71.00 us      46.00 us      93.00 us              6        OPEN
      0.00      66.00 us       9.00 us     114.00 us              7    GETXATTR
      0.00      50.58 us      16.00 us      90.00 us             26     INODELK
      0.00      56.38 us       1.00 us     111.00 us             24     OPENDIR
      0.02      66.23 us      28.00 us     146.00 us            113       FSTAT
      0.17     143.88 us      13.00 us     444.00 us            539      LOOKUP
      1.17      51.64 us      10.00 us     166.00 us          10103       WRITE
      2.89      44.06 us       7.00 us    2177.00 us          29349    FINODELK
      5.55     122.83 us      54.00 us    9281.00 us          20206    FXATTROP
     90.19    6249.64 us     237.00 us   59996.00 us           6448       FSYNC

    Duration: 772792 seconds
   Data Read: 0 bytes
Data Written: 18126558 bytes

Interval 2 Stats:
   Block Size:         1b+
 No. of Reads:           0
No. of Writes:        6816

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us              6     RELEASE
      0.00       0.00 us       0.00 us       0.00 us             20  RELEASEDIR
      0.00      20.75 us      13.00 us      29.00 us              4     READDIR
      0.00      87.00 us      72.00 us     129.00 us              4     SETATTR
      0.00      71.00 us      46.00 us      93.00 us              6        OPEN
      0.00      66.00 us       9.00 us     114.00 us              7    GETXATTR
      0.00      58.10 us       1.00 us     111.00 us             20     OPENDIR
      0.00      50.58 us      16.00 us      90.00 us             26     INODELK
      0.03      66.23 us      28.00 us     146.00 us            113       FSTAT
      0.16     145.84 us      13.00 us     346.00 us            336      LOOKUP
      1.18      51.98 us      11.00 us     157.00 us           6816       WRITE
      2.91      44.05 us       7.00 us    2177.00 us          19798    FINODELK
      5.59     122.70 us      54.00 us    9281.00 us          13632    FXATTROP
     90.11    6210.61 us     238.00 us   54711.00 us           4342       FSYNC

    Duration: 177 seconds
   Data Read: 0 bytes
Data Written: 6816 bytes

_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/XHAUIMGRCVEDCL...
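[Editor's note: the arbiter brick's profile above is dominated by FSYNC (90.19% of latency). As a quick sanity check on those numbers, the total time that brick spent inside FSYNC can be recovered from the average latency and call count quoted above; a minimal sketch:]

```python
# Figures taken from the arbiter brick (10.30.30.123) cumulative stats above.
fsync_avg_us = 6249.64   # Avg-latency of FSYNC in microseconds
fsync_calls = 6448       # No. of calls of FSYNC

total_s = fsync_avg_us * fsync_calls / 1e6
print(f"time spent in FSYNC: {total_s:.1f} s")
```

That works out to roughly 40 seconds of cumulative FSYNC latency on a brick that read zero bytes, which is consistent with flush cost, not raw bandwidth, being the bottleneck.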

So from the profile, it appears the XATTROPs and FINODELKs are way higher than the number of WRITEs:

<excerpt>
...
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.43     384.83 us      51.00 us   65375.00 us          13632    FXATTROP
      7.54   13535.70 us     225.00 us  210298.00 us           6816       WRITE
     45.99   28508.86 us       7.00 us 2591280.00 us          19751    FINODELK
...
</excerpt>

We'd noticed something similar in our internal tests and found inefficiencies in gluster's eager-lock implementation. This was fixed at https://review.gluster.org/c/glusterfs/+/19503. I need the two things I asked for in the previous mail to confirm whether you're hitting the same issue.

-Krutika

On Thu, Mar 7, 2019 at 12:24 PM Krutika Dhananjay <kdhananj@redhat.com> wrote:
Hi,
Could you share the following pieces of information to begin with -
1. output of `gluster volume info $AFFECTED_VOLUME_NAME`
2. glusterfs version you're running
-Krutika
On Sat, Mar 2, 2019 at 3:38 AM Drew R <drew.rash@gmail.com> wrote:
Saw some people asking for profile info. So I had started a migration from a 6TB WDGold 2+1arb replicated gluster to a 1TB samsung ssd 2+1 rep gluster and it's been running a while for a 100GB file thin provisioned with like 28GB actually used. Here is the profile info. I started the profiler like 5 minutes ago. The migration had been running for like 30minutes:
[snip: profile output quoted in full above]

Here is the output for our SSD gluster, which exhibits the same issue as the HDD glusters. However, I can replicate the issue on an 8TB WD Gold disk mounted over NFS as well (gluster removed from the equation), which is the reason I'm on the oVirt list. I can start a file copy that writes at max speed, then after a GB or two it drops down to 3-10 MBps, maxing out around 13.3 MBps overall. Testing outside of oVirt using dd doesn't show the same behavior: directly on the oVirt node, writing to the gluster volume or the 8TB NFS mount sustains max drive speed for large file copies.

I enabled writeback caching (as someone suggested) on the virtio-scsi Windows disk and one of our Windows 10 installs sped up. It still suffers from the sustained-write issue, which cripples the whole box; opening Chrome, for example, cripples the box, and so does SQL Server Management Studio.

Volume Name: gv1
Type: Replicate
Volume ID: 7340a436-d971-4d69-84f9-12a23cd76ec8
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 10.30.30.121:/gluster_bricks/gv1/brick
Brick2: 10.30.30.122:/gluster_bricks/gv1/brick
Brick3: 10.30.30.123:/gluster_bricks/gv1/brick (arbiter)
Options Reconfigured:
network.ping-timeout: 30
performance.strict-o-direct: on
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
features.shard: off
cluster.granular-entry-heal: enable
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

On Thu, Mar 7, 2019 at 1:00 AM Krutika Dhananjay <kdhananj@redhat.com> wrote:
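[Editor's note: the dd-vs-VM difference described above is consistent with the write pattern rather than raw throughput. A plain dd run issues large buffered writes, while a VM disk with cache=none and `performance.strict-o-direct: on` behaves more like many small writes, each forced to stable storage. A minimal local sketch (plain Python on a local filesystem, not gluster; illustration only) of why flushing every write collapses throughput:]

```python
import os
import tempfile
import time

def timed_writes(n, size, fsync_each):
    """Write n blocks of `size` bytes; optionally fsync after every write."""
    fd, path = tempfile.mkstemp()
    try:
        buf = b"\0" * size
        t0 = time.perf_counter()
        for _ in range(n):
            os.write(fd, buf)
            if fsync_each:
                os.fsync(fd)   # force each block to stable storage
        os.fsync(fd)           # one final flush either way
        return time.perf_counter() - t0
    finally:
        os.close(fd)
        os.unlink(path)

buffered = timed_writes(500, 4096, fsync_each=False)
synced = timed_writes(500, 4096, fsync_each=True)
print(f"buffered: {buffered:.3f}s  fsync-per-write: {synced:.3f}s")
```

On spinning disks the fsync-per-write case is dramatically slower, matching the profile above where FSYNC dominates latency on every brick.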
So from the profile, it appears the XATTROPs and FINODELKs are way higher than the number of WRITEs:
<excerpt>
...
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.43     384.83 us      51.00 us   65375.00 us          13632    FXATTROP
      7.54   13535.70 us     225.00 us  210298.00 us           6816       WRITE
     45.99   28508.86 us       7.00 us 2591280.00 us          19751    FINODELK
...
</excerpt>
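[Editor's note: plugging the call counts from this excerpt into the comparison Krutika describes, the lock and xattr operations outnumber the data writes roughly five to one:]

```python
# Call counts taken from the profile excerpt above
# (brick 10.30.30.121, interval 2 stats).
calls = {"FXATTROP": 13632, "FINODELK": 19751, "WRITE": 6816}

ratio = (calls["FXATTROP"] + calls["FINODELK"]) / calls["WRITE"]
print(f"lock/xattr fops per WRITE: {ratio:.2f}")  # ~4.90
```

With efficient eager-locking one would instead expect these bookkeeping fops to be amortized across many consecutive writes to the same file.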
We'd noticed something similar in our internal tests and found inefficiencies in gluster's eager-lock implementation. This was fixed at https://review.gluster.org/c/glusterfs/+/19503. I need the two things I asked for in the prev mail to confirm if you're hitting the same issue.
-Krutika
On Thu, Mar 7, 2019 at 12:24 PM Krutika Dhananjay <kdhananj@redhat.com> wrote:
Hi,
Could you share the following pieces of information to begin with -
1. output of `gluster volume info $AFFECTED_VOLUME_NAME`
2. glusterfs version you're running
-Krutika
On Sat, Mar 2, 2019 at 3:38 AM Drew R <drew.rash@gmail.com> wrote:
Saw some people asking for profile info. So I had started a migration from a 6TB WDGold 2+1arb replicated gluster to a 1TB samsung ssd 2+1 rep gluster and it's been running a while for a 100GB file thin provisioned with like 28GB actually used. Here is the profile info. I started the profiler like 5 minutes ago. The migration had been running for like 30minutes:
[snip: profile output quoted in full above]

Hi,

OK, thanks. I'd also asked for the glusterfs version you're running. Could you share that information as well?

-Krutika

On Thu, Mar 7, 2019 at 9:38 PM Drew Rash <drew.rash@gmail.com> wrote:
Here is the output for our SSD gluster, which exhibits the same issue as the HDD glusters. However, I can replicate the issue on an 8TB WD Gold disk mounted over NFS as well (gluster removed from the equation), which is the reason I'm on the oVirt list. I can start a file copy that writes at max speed, then after a GB or two it drops down to 3-10 MBps, maxing out around 13.3 MBps overall. Testing outside of oVirt using dd doesn't show the same behavior: directly on the oVirt node, writing to the gluster volume or the 8TB NFS mount sustains max drive speed for large file copies.

I enabled writeback caching (as someone suggested) on the virtio-scsi Windows disk and one of our Windows 10 installs sped up. It still suffers from the sustained-write issue, which cripples the whole box; opening Chrome, for example, cripples the box, and so does SQL Server Management Studio.
[snip: gluster volume info output quoted in full above]
On Thu, Mar 7, 2019 at 1:00 AM Krutika Dhananjay <kdhananj@redhat.com> wrote:
So from the profile, it appears the XATTROPs and FINODELKs are way higher than the number of WRITEs:
<excerpt> ... ... %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.43 384.83 us 51.00 us 65375.00 us 13632 FXATTROP 7.54 13535.70 us 225.00 us 210298.00 us 6816 WRITE 45.99 28508.86 us 7.00 us 2591280.00 us 19751 FINODELK
... ... </excerpt>
We'd noticed something similar in our internal tests and found inefficiencies in gluster's eager-lock implementation. This was fixed at https://review.gluster.org/c/glusterfs/+/19503. I need the two things I asked for in the prev mail to confirm if you're hitting the same issue.
-Krutika
On Thu, Mar 7, 2019 at 12:24 PM Krutika Dhananjay <kdhananj@redhat.com> wrote:
Hi,
Could you share the following pieces of information to begin with -
1. output of `gluster volume info $AFFECTED_VOLUME_NAME` 2. glusterfs version you're running
-Krutika
On Sat, Mar 2, 2019 at 3:38 AM Drew R <drew.rash@gmail.com> wrote:
Saw some people asking for profile info. So I had started a migration from a 6TB WDGold 2+1arb replicated gluster to a 1TB samsung ssd 2+1 rep gluster and it's been running a while for a 100GB file thin provisioned with like 28GB actually used. Here is the profile info. I started the profiler like 5 minutes ago. The migration had been running for like 30minutes:
gluster volume profile gv2 info

Brick: 10.30.30.122:/gluster_bricks/gv2/brick
---------------------------------------------
Cumulative Stats:
   Block Size:      256b+      512b+     1024b+
 No. of Reads:       1189          8         12
No. of Writes:          4       3245        883

   Block Size:     2048b+     4096b+     8192b+
 No. of Reads:         10         20          2
No. of Writes:       1087     312228     124080

   Block Size:    16384b+    32768b+    65536b+
 No. of Reads:          0          1         52
No. of Writes:       5188       3617       5532

   Block Size:   131072b+
 No. of Reads:      70191
No. of Writes:     634192

 %-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls  Fop
 ---------  -----------  -----------  -----------  ------------  ----
      0.00      0.00 us      0.00 us      0.00 us             2  FORGET
      0.00      0.00 us      0.00 us      0.00 us           202  RELEASE
      0.00      0.00 us      0.00 us      0.00 us          1297  RELEASEDIR
      0.00     14.50 us      9.00 us     20.00 us             4  READDIR
      0.00     38.00 us      9.00 us    120.00 us             7  GETXATTR
      0.00     66.00 us     34.00 us    128.00 us             6  OPEN
      0.00    137.25 us     52.00 us    195.00 us             4  SETATTR
      0.00     23.19 us     11.00 us     46.00 us            26  INODELK
      0.00     41.58 us     18.00 us     79.00 us            24  OPENDIR
      0.00    166.70 us     15.00 us    775.00 us            27  READDIRP
      0.01    135.29 us     12.00 us  11695.00 us           221  STATFS
      0.01    176.54 us     22.00 us  22944.00 us           364  FSTAT
      0.02    626.21 us     13.00 us  17308.00 us           168  STAT
      0.09    834.84 us      9.00 us  34337.00 us           607  LOOKUP
      0.73    146.18 us      6.00 us  52255.00 us         29329  FINODELK
      1.03    298.20 us     42.00 us  43711.00 us         20204  FXATTROP
     15.38   8903.40 us    213.00 us 213832.00 us         10102  WRITE
     39.14  26796.37 us    222.00 us 122696.00 us          8538  READ
     43.59  39536.79 us    259.00 us 183630.00 us          6446  FSYNC

    Duration: 15078 seconds
   Data Read: 9207377205 bytes
Data Written: 86214017762 bytes
Interval 2 Stats:
   Block Size:      256b+      512b+     1024b+
 No. of Reads:         17          0          0
No. of Writes:          0         43          7

   Block Size:     2048b+     4096b+     8192b+
 No. of Reads:          0          7          0
No. of Writes:         16       1881       1010

   Block Size:    16384b+    32768b+    65536b+
 No. of Reads:          0          0          6
No. of Writes:        305        586       2359

   Block Size:   131072b+
 No. of Reads:       7162
No. of Writes:        610

 %-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls  Fop
 ---------  -----------  -----------  -----------  ------------  ----
      0.00      0.00 us      0.00 us      0.00 us             6  RELEASE
      0.00      0.00 us      0.00 us      0.00 us            20  RELEASEDIR
      0.00     14.50 us      9.00 us     20.00 us             4  READDIR
      0.00     38.00 us      9.00 us    120.00 us             7  GETXATTR
      0.00     66.00 us     34.00 us    128.00 us             6  OPEN
      0.00    137.25 us     52.00 us    195.00 us             4  SETATTR
      0.00     23.19 us     11.00 us     46.00 us            26  INODELK
      0.00     40.05 us     18.00 us     79.00 us            20  OPENDIR
      0.00    180.33 us     16.00 us    775.00 us            21  READDIRP
      0.01    181.77 us     12.00 us  11695.00 us           149  STATFS
      0.01    511.23 us     15.00 us  16954.00 us           111  STAT
      0.01    210.22 us     22.00 us  22944.00 us           291  FSTAT
      0.09    979.73 us      9.00 us  23539.00 us           404  LOOKUP
      0.81    179.58 us      6.00 us  52255.00 us         19786  FINODELK
      1.07    342.71 us     42.00 us  43711.00 us         13633  FXATTROP
     14.18   9087.30 us    213.00 us 213832.00 us          6817  WRITE
     40.00  40227.38 us    259.00 us 157934.00 us          4343  FSYNC
     43.81  26601.57 us    222.00 us 118756.00 us          7192  READ

    Duration: 177 seconds
   Data Read: 939534945 bytes
Data Written: 363763200 bytes
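Converting that interval summary into throughput shows why the VM feels so slow; these figures line up with the roughly 6MB/s seen in the Windows resource monitor:

```python
# "Interval 2" summary for brick 10.30.30.122, taken directly from
# the profile output above, converted to MB/s.
duration_s   = 177
data_read    = 939_534_945    # bytes
data_written = 363_763_200    # bytes

read_mbps  = data_read / duration_s / 1e6
write_mbps = data_written / duration_s / 1e6

print(f"read:  {read_mbps:.1f} MB/s")   # ~5.3 MB/s
print(f"write: {write_mbps:.1f} MB/s")  # ~2.1 MB/s
```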
Brick: 10.30.30.121:/gluster_bricks/gv2/brick
---------------------------------------------
Cumulative Stats:
   Block Size:        8b+      256b+      512b+
 No. of Reads:          0     142626     329092
No. of Writes:         12         98     123230

   Block Size:     1024b+     2048b+     4096b+
 No. of Reads:      21175      18223    3345043
No. of Writes:      58646      69147    6092143

   Block Size:     8192b+    16384b+    32768b+
 No. of Reads:    1090937     216037     748595
No. of Writes:    2030403     243915     240942

   Block Size:    65536b+   131072b+
 No. of Reads:    1524667    4828662
No. of Writes:     605125    1889892

 %-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls  Fop
 ---------  -----------  -----------  -----------  ------------  ----
      0.00      0.00 us      0.00 us      0.00 us           133  FORGET
      0.00      0.00 us      0.00 us      0.00 us          4821  RELEASE
      0.00      0.00 us      0.00 us      0.00 us         42742  RELEASEDIR
      0.00     28.50 us     12.00 us     61.00 us             4  READDIR
      0.00     71.75 us     59.00 us     88.00 us             4  SETATTR
      0.00     44.57 us     12.00 us     94.00 us             7  GETXATTR
      0.00     62.17 us     42.00 us     89.00 us             6  OPEN
      0.00     35.92 us      1.00 us     76.00 us            24  OPENDIR
      0.00    325.54 us     22.00 us   1117.00 us            13  READDIRP
      0.00    482.13 us     19.00 us  12473.00 us            46  STAT
      0.00    217.65 us     15.00 us  32841.00 us           221  STATFS
      0.01    497.52 us     11.00 us  29179.00 us           539  LOOKUP
      0.02    685.44 us     16.00 us  61131.00 us           615  FSTAT
      0.04  31691.92 us     17.00 us 349648.00 us            26  INODELK
      0.45    451.95 us     51.00 us  65375.00 us         20206  FXATTROP
      6.90  13906.03 us    225.00 us 210298.00 us         10103  WRITE
     18.47  58375.96 us    389.00 us 218211.00 us          6446  FSYNC
     27.45  27279.84 us    184.00 us 241444.00 us         20503  READ
     46.66  32462.82 us      7.00 us 2750519.00 us        29283  FINODELK

    Duration: 475152 seconds
   Data Read: 799630169437 bytes
Data Written: 362708560796 bytes
Interval 2 Stats:
   Block Size:      256b+      512b+     1024b+
 No. of Reads:         53          0          0
No. of Writes:          0         43          7

   Block Size:     2048b+     4096b+     8192b+
 No. of Reads:          0        891        616
No. of Writes:         16       1881       1010

   Block Size:    16384b+    32768b+    65536b+
 No. of Reads:        288        750         52
No. of Writes:        304        586       2359

   Block Size:   131072b+
 No. of Reads:       9445
No. of Writes:        610

 %-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls  Fop
 ---------  -----------  -----------  -----------  ------------  ----
      0.00      0.00 us      0.00 us      0.00 us             6  RELEASE
      0.00      0.00 us      0.00 us      0.00 us            20  RELEASEDIR
      0.00     28.50 us     12.00 us     61.00 us             4  READDIR
      0.00     71.75 us     59.00 us     88.00 us             4  SETATTR
      0.00     44.57 us     12.00 us     94.00 us             7  GETXATTR
      0.00     62.17 us     42.00 us     89.00 us             6  OPEN
      0.00     34.50 us      1.00 us     76.00 us            20  OPENDIR
      0.00     55.16 us     19.00 us    128.00 us            25  STAT
      0.00    363.18 us     25.00 us   1117.00 us            11  READDIRP
      0.00     71.61 us     15.00 us   4323.00 us           149  STATFS
      0.01    251.40 us     11.00 us  15344.00 us           336  LOOKUP
      0.02    594.11 us     16.00 us  61131.00 us           450  FSTAT
      0.07  31691.92 us     17.00 us 349648.00 us            26  INODELK
      0.43    384.83 us     51.00 us  65375.00 us         13632  FXATTROP
      7.54  13535.70 us    225.00 us 210298.00 us          6816  WRITE
     20.03  56497.17 us    394.00 us 218211.00 us          4340  FSYNC
     25.91  26229.86 us    184.00 us 241444.00 us         12095  READ
     45.99  28508.86 us      7.00 us 2591280.00 us        19751  FINODELK

    Duration: 177 seconds
   Data Read: 1298631621 bytes
Data Written: 363744256 bytes
Brick: 10.30.30.123:/gluster_bricks/gv2/brick
---------------------------------------------
Cumulative Stats:
   Block Size:        1b+
 No. of Reads:          0
No. of Writes:   18126558

 %-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls  Fop
 ---------  -----------  -----------  -----------  ------------  ----
      0.00      0.00 us      0.00 us      0.00 us           383  FORGET
      0.00      0.00 us      0.00 us      0.00 us          9602  RELEASE
      0.00      0.00 us      0.00 us      0.00 us         70644  RELEASEDIR
      0.00     20.75 us     13.00 us     29.00 us             4  READDIR
      0.00     87.00 us     72.00 us    129.00 us             4  SETATTR
      0.00     71.00 us     46.00 us     93.00 us             6  OPEN
      0.00     66.00 us      9.00 us    114.00 us             7  GETXATTR
      0.00     50.58 us     16.00 us     90.00 us            26  INODELK
      0.00     56.38 us      1.00 us    111.00 us            24  OPENDIR
      0.02     66.23 us     28.00 us    146.00 us           113  FSTAT
      0.17    143.88 us     13.00 us    444.00 us           539  LOOKUP
      1.17     51.64 us     10.00 us    166.00 us         10103  WRITE
      2.89     44.06 us      7.00 us   2177.00 us         29349  FINODELK
      5.55    122.83 us     54.00 us   9281.00 us         20206  FXATTROP
     90.19   6249.64 us    237.00 us  59996.00 us          6448  FSYNC

    Duration: 772792 seconds
   Data Read: 0 bytes
Data Written: 18126558 bytes
Interval 2 Stats:
   Block Size:        1b+
 No. of Reads:          0
No. of Writes:       6816

 %-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls  Fop
 ---------  -----------  -----------  -----------  ------------  ----
      0.00      0.00 us      0.00 us      0.00 us             6  RELEASE
      0.00      0.00 us      0.00 us      0.00 us            20  RELEASEDIR
      0.00     20.75 us     13.00 us     29.00 us             4  READDIR
      0.00     87.00 us     72.00 us    129.00 us             4  SETATTR
      0.00     71.00 us     46.00 us     93.00 us             6  OPEN
      0.00     66.00 us      9.00 us    114.00 us             7  GETXATTR
      0.00     58.10 us      1.00 us    111.00 us            20  OPENDIR
      0.00     50.58 us     16.00 us     90.00 us            26  INODELK
      0.03     66.23 us     28.00 us    146.00 us           113  FSTAT
      0.16    145.84 us     13.00 us    346.00 us           336  LOOKUP
      1.18     51.98 us     11.00 us    157.00 us          6816  WRITE
      2.91     44.05 us      7.00 us   2177.00 us         19798  FINODELK
      5.59    122.70 us     54.00 us   9281.00 us         13632  FXATTROP
     90.11   6210.61 us    238.00 us  54711.00 us          4342  FSYNC

    Duration: 177 seconds
   Data Read: 0 bytes
Data Written: 6816 bytes
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/XHAUIMGRCVEDCL...
participants (6)
- Donny Davis
- Drew R
- Drew Rash
- drew.rash@gmail.com
- Jayme
- Krutika Dhananjay