Gluster problems, cluster performance issues

Hello:

I've been having some cluster and Gluster performance issues lately. I also found that my cluster was out of date and was trying to apply updates (hoping to fix some of these), and discovered the oVirt 4.1 repos were taken completely offline. So I was forced to begin an upgrade to 4.2. According to the docs I found and read, I needed only to add the new repo, do a yum update, and reboot my hosts (I did the yum update on the hosts and ran engine-setup on my hosted engine). Things seemed to work relatively well, except for a Gluster sync issue that showed up.

My cluster is a 3-node hyperconverged cluster. I upgraded the hosted engine first, then node 3. When node 3 came back up, for some reason one of my Gluster volumes would not sync. Here's sample output:

[root@ovirt3 ~]# gluster volume heal data-hdd info
Brick 172.172.1.11:/gluster/brick3/data-hdd
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
Status: Connected
Number of entries: 8

Brick 172.172.1.12:/gluster/brick3/data-hdd
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
Status: Connected
Number of entries: 8

Brick 172.172.1.13:/gluster/brick3/data-hdd
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
Status: Connected
Number of entries: 8

---------

It's been in this state for a couple of days now, and bandwidth monitoring shows no appreciable data moving. I've tried repeatedly commanding a full heal from all three nodes in the cluster. It's always the same files that need healing.

When running "gluster volume heal data-hdd statistics", I sometimes see different information, but always some number of "heal failed" entries. It shows 0 for split brain.

I'm not quite sure what to do. I suspect it may be due to nodes 1 and 2 still being on the older oVirt/Gluster release, but I'm afraid to upgrade and reboot them until I have a good Gluster sync (I don't need to create a split-brain issue). How do I proceed with this?

Second issue: I've been experiencing VERY POOR performance on most of my VMs, to the point that logging into a Windows 10 VM via Remote Desktop can take 5 minutes, and launching QuickBooks inside that VM can easily take 10 minutes. On some Linux VMs, I get random messages like this:

Message from syslogd@unifi at May 28 20:39:23 ...
kernel:[6171996.308904] NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [mongod:14766]

(the process and PID are often different)

I'm not quite sure what to do about this either. My initial thought was to upgrade everything to current and see if the problem persists, but I cannot move forward with that until my Gluster volume is healed...

Thanks!
--Jim
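[Editor's note: when comparing `heal info` output across bricks over time, a quick text pass saves eyeballing. This is a sketch that assumes the exact output format shown above (a "Brick ..." header, then paths, then "Number of entries: N"); the `summarize_heal` helper name is ours, not a gluster command.]

```shell
#!/bin/sh
# Summarize 'gluster volume heal <vol> info' output: pending entries per brick.
# Pipe live output into it, or a saved copy for offline comparison.
summarize_heal() {
  awk '/^Brick /              { brick = $2 }
       /^Number of entries:/  { printf "%s\t%s\n", brick, $4 }'
}

# Example run against a trimmed sample of the output above:
printf '%s\n' \
  'Brick 172.172.1.11:/gluster/brick3/data-hdd' \
  '/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/sample-entry' \
  'Status: Connected' \
  'Number of entries: 8' | summarize_heal
# prints: 172.172.1.11:/gluster/brick3/data-hdd	8
```

Running it after each attempted heal makes it easy to see whether the per-brick counts ever actually drop.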

[Adding gluster-users to look at the heal issue]

On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir <jim@palousetech.com> wrote:
[...]
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3LEV6ZQ3JV2XLAL7NYBTXOYMYUOTIRQF/

Ok, things have gotten MUCH worse this morning. I'm getting random errors from VMs; right now about a third of my VMs have been paused due to storage issues, and most of the remaining VMs are not performing well.

At this point, I am in full EMERGENCY mode, as my production services are now impacted, and I'm getting calls coming in with problems... I'd greatly appreciate help... VMs are running VERY slowly (when they run), and they are steadily getting worse. I don't know why. I was seeing CPU peaks (to 100%) on several VMs, in perfect sync, for a few minutes at a time (while the VM became unresponsive, and any Linux VMs I was logged into were giving me the CPU-stuck messages from my original post). Is all this storage related?

I also have two different Gluster volumes for VM storage, and only one had the issues, but now VMs on both are being affected at the same time and in the same way.

--Jim

On Mon, May 28, 2018 at 10:50 PM, Sahina Bose <sabose@redhat.com> wrote:
[...]

Do you see errors reported in the mount logs for the volume? If so, could you attach the logs? Are there any issues with your underlying disks? Can you also attach the output of volume profiling?

On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir <jim@palousetech.com> wrote:
[...]
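[Editor's note: the mount logs Sahina refers to are the fuse-client logs under /var/log/glusterfs/ on each host. GlusterFS conventionally names a client log after its mount point with the slashes turned into dashes; the mount path below is a placeholder for illustration, not taken from this thread.]

```shell
#!/bin/sh
# Sketch: derive the fuse-client log file name for a gluster mount.
# Assumes the slashes-to-dashes naming convention; verify with
# 'ls /var/log/glusterfs/' on the host.
mountpoint="/rhev/data-center/mnt/glusterSD/172.172.1.11:_data-hdd"  # placeholder path
log="/var/log/glusterfs/$(printf '%s' "${mountpoint#/}" | tr '/' '-').log"
printf '%s\n' "$log"
# prints: /var/log/glusterfs/rhev-data-center-mnt-glusterSD-172.172.1.11:_data-hdd.log
```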

I see in messages on ovirt3 (my 3rd machine, the one upgraded to 4.2):

May 29 11:54:41 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
May 29 11:54:51 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
May 29 11:55:01 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)

(appears a lot)

I also found, in the ssh session on that machine, some sysv warnings about the backing disk for one of the Gluster volumes (straight replica 3). The glusterfs process for that disk on that machine went offline. It's my understanding that the volume should continue to work with the other two machines while I attempt to replace that disk, right?

Attempted writes (touching an empty file) can take 15 seconds; repeating the write later is much faster.

Gluster generates a bunch of different log files; I don't know which ones you want, or from which machine(s). How do I do "volume profiling"?

Thanks!

On Tue, May 29, 2018 at 11:53 AM, Sahina Bose <sabose@redhat.com> wrote:
[...]
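[Editor's note: "volume profiling" here means the gluster profile commands, run on any one node. A dry-run sketch follows; the `run` wrapper only prints each command so the workflow can be reviewed before executing it against a live, already-struggling cluster. Swap the wrapper for direct execution on a host.]

```shell
#!/bin/sh
# Dry-run sketch of gathering gluster volume profile output.
VOL=data-hdd                       # volume name from this thread
run() { printf '+ %s\n' "$*"; }    # print only; replace body with "$@" to execute

run gluster volume profile "$VOL" start   # begin collecting per-brick stats
run gluster volume profile "$VOL" info    # dump latency and fop counters (attach this)
run gluster volume profile "$VOL" stop    # stop collection when done
# prints the three commands, each prefixed with '+ '
```

Capturing `profile info` a few minutes apart while VMs are slow shows which file operations dominate latency.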
Do you see errors reported in the mount logs for the volume? If so, could you attach the logs? Any issues with your underlying disks. Can you also attach output of volume profiling?
On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir <jim@palousetech.com> wrote:
Ok, things have gotten MUCH worse this morning. I'm getting random errors from VMs, right now, about a third of my VMs have been paused due to storage issues, and most of the remaining VMs are not performing well.
At this point, I am in full EMERGENCY mode, as my production services are now impacted, and I'm getting calls coming in with problems...
I'd greatly appreciate help...VMs are running VERY slowly (when they run), and they are steadily getting worse. I don't know why. I was seeing CPU peaks (to 100%) on several VMs, in perfect sync, for a few minutes at a time (while the VM became unresponsive and any VMs I was logged into that were linux were giving me the CPU stuck messages in my origional post). Is all this storage related?
I also have two different gluster volumes for VM storage, and only one had the issues, but now VMs in both are being affected at the same time and same way.
--Jim
On Mon, May 28, 2018 at 10:50 PM, Sahina Bose <sabose@redhat.com> wrote:
[Adding gluster-users to look at the heal issue]
On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir <jim@palousetech.com> wrote:
Hello:
I've been having some cluster and gluster performance issues lately. I also found that my cluster was out of date, and was trying to apply updates (hoping to fix some of these), and discovered the ovirt 4.1 repos were taken completely offline. So, I was forced to begin an upgrade to 4.2. According to docs I found/read, I needed only add the new repo, do a yum update, reboot, and be good on my hosts (did the yum update, the engine-setup on my hosted engine). Things seemed to work relatively well, except for a gluster sync issue that showed up.
My cluster is a 3 node hyperconverged cluster. I upgraded the hosted engine first, then engine 3. When engine 3 came back up, for some reason one of my gluster volumes would not sync. Here's sample output:
[root@ovirt3 ~]# gluster volume heal data-hdd info Brick 172.172.1.11:/gluster/brick3/data-hdd /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 Status: Connected Number of entries: 8
Brick 172.172.1.12:/gluster/brick3/data-hdd /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b Status: Connected Number of entries: 8
Brick 172.172.1.13:/gluster/brick3/data-hdd /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 Status: Connected Number of entries: 8
--------- Its been in this state for a couple days now, and bandwidth monitoring shows no appreciable data moving. I've tried repeatedly commanding a full heal from all three clusters in the node. Its always the same files that need healing.
When running gluster volume heal data-hdd statistics, I see sometimes different information, but always some number of "heal failed" entries. It shows 0 for split brain.
I'm not quite sure what to do. I suspect it may be due to nodes 1 and 2 still being on the older ovirt/gluster release, but I'm afraid to upgrade and reboot them until I have a good gluster sync (don't need to create a split brain issue). How do I proceed with this?
Second issue: I've been experiencing VERY POOR performance on most of my VMs, to the point that logging into a Windows 10 VM via remote desktop can take 5 minutes, and launching QuickBooks inside said VM can easily take 10 minutes. On some Linux VMs, I get random messages like this:

Message from syslogd@unifi at May 28 20:39:23 ...
kernel:[6171996.308904] NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [mongod:14766]
(the process and PID are often different)
I'm not quite sure what to do about this either. My initial thought was to upgrade everything to current and see if it's still there, but I cannot move forward with that until my gluster is healed...
Thanks! --Jim
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3LEV6ZQ3JV2XLAL7NYBTXOYMYUOTIRQF/

On one ovirt server, I'm now seeing these messages:

[56474.239725] blk_update_request: 63 callbacks suppressed
[56474.239732] blk_update_request: I/O error, dev dm-2, sector 0
[56474.240602] blk_update_request: I/O error, dev dm-2, sector 3905945472
[56474.241346] blk_update_request: I/O error, dev dm-2, sector 3905945584
[56474.242236] blk_update_request: I/O error, dev dm-2, sector 2048
[56474.243072] blk_update_request: I/O error, dev dm-2, sector 3905943424
[56474.243997] blk_update_request: I/O error, dev dm-2, sector 3905943536
[56474.247347] blk_update_request: I/O error, dev dm-2, sector 0
[56474.248315] blk_update_request: I/O error, dev dm-2, sector 3905945472
[56474.249231] blk_update_request: I/O error, dev dm-2, sector 3905945584
[56474.250221] blk_update_request: I/O error, dev dm-2, sector 2048

On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir <jim@palousetech.com> wrote:
I see in messages on ovirt3 (my 3rd machine, the one upgraded to 4.2):
May 29 11:54:41 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
May 29 11:54:51 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
May 29 11:55:01 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)

(appears a lot)
I also found, in the ssh session on that host, some sysv warnings about the backing disk for one of the gluster volumes (straight replica 3). The glusterfs process for that disk on that machine went offline. It's my understanding that the volume should continue to work with the other two machines while I attempt to replace that disk, right? Attempted writes (touching an empty file) can take 15 seconds; repeating it later is much faster.
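To confirm which bricks gluster itself reports as offline, a filter along these lines can help. This is a sketch: the heredoc stands in for live "gluster volume status" output and the sample rows are made up, and the replace-brick syntax in the comment should be checked against the docs for your gluster version before use.

```shell
# Print bricks whose "Online" column (field 5) is "N". Live usage would be:
#   gluster volume status data-hdd | awk '/^Brick/ && $5 == "N" {print "OFFLINE:", $2}'
# Once a replacement disk is in place, the usual path is roughly:
#   gluster volume replace-brick <vol> <old-brick> <new-brick> commit force
awk '/^Brick/ && $5 == "N" { print "OFFLINE:", $2 }' <<'EOF'
Brick 172.172.1.11:/gluster/brick3/data-hdd 49153 0 Y 3254
Brick 172.172.1.12:/gluster/brick3/data-hdd 49153 0 Y 3103
Brick 172.172.1.13:/gluster/brick3/data-hdd N/A N/A N N/A
EOF
```

With replica 3 and only one brick down, the remaining two bricks should keep serving I/O, though client-side quorum settings can affect write behavior.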
Gluster generates a bunch of different log files; I don't know which ones you want, or from which machine(s).
How do I do "volume profiling"?
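For reference, profiling is driven from the gluster CLI. The subcommands below follow the standard GlusterFS client syntax, but verify them against your installed version; the ranking snippet at the end runs on a captured fragment (the heredoc), not live output.

```shell
# Standard GlusterFS profiling workflow (check `gluster volume profile help`
# on your version before relying on these exact subcommands):
#   gluster volume profile data-hdd start
#   gluster volume profile data-hdd info    # capture while the slowness occurs
#   gluster volume profile data-hdd stop
# Example post-processing: rank fops by %-latency from a captured fragment.
sort -rn <<'EOF' | head -3
41.04 INODELK
37.00 WRITE
13.27 READ
5.92 FSYNC
1.72 FINODELK
EOF
```

The top few fops by %-latency are usually the quickest pointer to whether time is going into locking (INODELK/FINODELK), flushing (FSYNC), or plain data I/O.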
Thanks!
On Tue, May 29, 2018 at 11:53 AM, Sahina Bose <sabose@redhat.com> wrote:
Do you see errors reported in the mount logs for the volume? If so, could you attach the logs? Any issues with your underlying disks? Can you also attach the output of volume profiling?
On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir <jim@palousetech.com> wrote:
Ok, things have gotten MUCH worse this morning. I'm getting random errors from VMs; right now about a third of my VMs have been paused due to storage issues, and most of the remaining VMs are not performing well.
At this point, I am in full EMERGENCY mode, as my production services are now impacted, and I'm getting calls coming in with problems...
I'd greatly appreciate help... VMs are running VERY slowly (when they run), and they are steadily getting worse. I don't know why. I was seeing CPU peaks (to 100%) on several VMs, in perfect sync, for a few minutes at a time (while the VM became unresponsive, and any Linux VMs I was logged into were giving me the CPU stuck messages from my original post). Is all this storage related?
I also have two different gluster volumes for VM storage, and only one had the issues, but now VMs on both are being affected at the same time and in the same way.
--Jim
On Mon, May 28, 2018 at 10:50 PM, Sahina Bose <sabose@redhat.com> wrote:
[Adding gluster-users to look at the heal issue]
On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir <jim@palousetech.com> wrote:
I would check disk status and accessibility of the mount points where your gluster volumes reside.

On Tue, May 29, 2018, 22:28 Jim Kusznir <jim@palousetech.com> wrote:
Thank you for your response. I have 4 gluster volumes. 3 are replica 2 + arbiter; the replica bricks are on ovirt1 and ovirt2, with the arbiter on ovirt3. The 4th volume is replica 3, with a brick on all three ovirt machines.

The first 3 volumes are on an SSD disk; the 4th is on a Seagate SSHD (same in all three machines). On ovirt3, the SSHD has reported hard IO failures, and that brick is offline. However, the other two replicas are fully operational (although they still show contents in the heal info command that won't go away, but that may be the case until I replace the failed disk).

What is bothering me is that ALL 4 gluster volumes are showing horrible performance issues. At this point, as the bad disk has been completely offlined, I would expect gluster to perform at normal speed, but that is definitely not the case.

I've also noticed that the performance hits seem to come in waves: things work acceptably (but slowly) for a while, then suddenly it's as if all disk IO on all volumes (including non-gluster local OS disk volumes for the hosts) pauses for about 30 seconds, then IO resumes. During those times, I start getting VM not responding and host not responding notices, and the applications have major issues. I've shut down most of my VMs and am down to just my essential core VMs (shed about 75% of my VMs). I'm still experiencing the same issues.

Am I correct in believing that once the failed disk was brought offline, performance should return to normal?

On Tue, May 29, 2018 at 1:27 PM, Alex K <rightkicktech@gmail.com> wrote:
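One way to put numbers on those ~30-second stalls is a periodic synced-write probe. This is a sketch: the target path is a placeholder; point it at a gluster mount, and separately at a local OS disk path, to compare, since both reportedly stall.

```shell
# Periodically time a small synced write; a stall shows up as one huge sample.
# TARGET is a placeholder - substitute a gluster mount (and a local-disk path
# for comparison). Loop count kept tiny here for illustration.
TARGET="${TARGET:-/tmp/io-stall-probe}"
mkdir -p "$TARGET"
for i in 1 2 3; do
  start=$(date +%s%N)                    # nanoseconds (GNU date)
  touch "$TARGET/.probe" && sync
  end=$(date +%s%N)
  echo "sample $i: $(( (end - start) / 1000000 )) ms"
  rm -f "$TARGET/.probe"
done
```

If the gluster mount and the local OS disk stall at the same moments, that points at something host-wide (controller, multipath, or the failing SSHD still hanging the bus) rather than gluster itself.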
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/ACO7RFSLBSRBAIONIC2HQ6Z24ZDES5MF/

I think this is the profile information for one of the volumes that lives on the SSDs and is fully operational with no down/problem disks:

[root@ovirt2 yum.repos.d]# gluster volume profile data info
Brick: ovirt2.nwfiber.com:/gluster/brick2/data
----------------------------------------------
Cumulative Stats:
   Block Size:       256b+      512b+     1024b+
 No. of Reads:         983       2696       1059
No. of Writes:           0       1113        302

   Block Size:      2048b+     4096b+     8192b+
 No. of Reads:         852      88608      53526
No. of Writes:         522     812340      76257

   Block Size:     16384b+    32768b+    65536b+
 No. of Reads:       54351     241901      15024
No. of Writes:       21636       8656       8976

   Block Size:    131072b+
 No. of Reads:      524156
No. of Writes:      296071

 %-latency  Avg-latency  Min-Latency   Max-Latency  No. of calls  Fop
 ---------  -----------  -----------   -----------  ------------  ----
      0.00      0.00 us      0.00 us       0.00 us          4189  RELEASE
      0.00      0.00 us      0.00 us       0.00 us          1257  RELEASEDIR
      0.00     46.19 us     12.00 us     187.00 us            69  FLUSH
      0.00    147.00 us     78.00 us     367.00 us            86  REMOVEXATTR
      0.00    223.46 us     24.00 us    1166.00 us           149  READDIR
      0.00    565.34 us     76.00 us    3639.00 us            88  FTRUNCATE
      0.00    263.28 us     20.00 us   28385.00 us           228  LK
      0.00     98.84 us      2.00 us     880.00 us          1198  OPENDIR
      0.00     91.59 us     26.00 us   10371.00 us          3853  STATFS
      0.00    494.14 us     17.00 us  193439.00 us          1171  GETXATTR
      0.00    299.42 us     35.00 us    9799.00 us          2044  READDIRP
      0.00   1965.31 us    110.00 us  382258.00 us           321  XATTROP
      0.01    113.40 us     24.00 us   61061.00 us          8134  STAT
      0.01    755.38 us     57.00 us  607603.00 us          3196  DISCARD
      0.05   2690.09 us     58.00 us  2704761.00 us         3206  OPEN
      0.10 119978.25 us     97.00 us  9406684.00 us          154  SETATTR
      0.18    101.73 us     28.00 us   700477.00 us        313379 FSTAT
      0.23   1059.84 us     25.00 us  2716124.00 us         38255 LOOKUP
      0.47   1024.11 us     54.00 us  6197164.00 us         81455 FXATTROP
      1.72   2984.00 us     15.00 us 37098954.00 us        103020 FINODELK
      5.92  44315.32 us     51.00 us 24731536.00 us         23957 FSYNC
     13.27   2399.78 us     25.00 us 22089540.00 us        991005 READ
     37.00   5980.43 us     52.00 us 22099889.00 us       1108976 WRITE
     41.04   5452.75 us     13.00 us 22102452.00 us       1349053 INODELK

    Duration: 10026 seconds
   Data Read: 80046027759 bytes
Data Written: 44496632320 bytes

Interval 1 Stats:
   Block Size:       256b+      512b+     1024b+
 No. of Reads:         983       2696       1059
No. of Writes:           0        838        185

   Block Size:      2048b+     4096b+     8192b+
 No. of Reads:         852      85856      51575
No. of Writes:         382     705802      57812

   Block Size:     16384b+    32768b+    65536b+
 No. of Reads:       52673     232093      14984
No. of Writes:       13499       4908       4242

   Block Size:    131072b+
 No. of Reads:      460040
No. of Writes:        6411

 %-latency  Avg-latency  Min-Latency   Max-Latency  No. of calls  Fop
 ---------  -----------  -----------   -----------  ------------  ----
      0.00      0.00 us      0.00 us       0.00 us          2093  RELEASE
      0.00      0.00 us      0.00 us       0.00 us          1093  RELEASEDIR
      0.00     53.38 us     26.00 us     111.00 us            16  FLUSH
      0.00    145.14 us     78.00 us     367.00 us            71  REMOVEXATTR
      0.00    190.96 us    114.00 us     298.00 us            71  SETATTR
      0.00    213.38 us     24.00 us    1145.00 us            90  READDIR
      0.00    263.28 us     20.00 us   28385.00 us           228  LK
      0.00    101.76 us      2.00 us     880.00 us          1093  OPENDIR
      0.01     93.60 us     27.00 us   10371.00 us          3090  STATFS
      0.02    537.47 us     17.00 us  193439.00 us          1038  GETXATTR
      0.03    297.44 us     35.00 us    9799.00 us          1990  READDIRP
      0.03   2357.28 us    110.00 us  382258.00 us           253  XATTROP
      0.04    385.93 us     58.00 us   47593.00 us          2091  OPEN
      0.04    114.86 us     24.00 us   61061.00 us          7715  STAT
      0.06    444.59 us     57.00 us  333240.00 us          3053  DISCARD
      0.42    316.24 us     25.00 us  290728.00 us         29823  LOOKUP
      0.73    257.92 us     54.00 us  344812.00 us         63296  FXATTROP
      1.37     98.30 us     28.00 us   67621.00 us        313172  FSTAT
      1.58   2124.69 us     51.00 us  849200.00 us         16717  FSYNC
      5.73    162.46 us     52.00 us  748492.00 us        794079  WRITE
      7.19   2065.17 us     16.00 us 37098954.00 us         78381  FINODELK
     36.44    886.32 us     25.00 us  2216436.00 us        925421  READ
     46.30   1178.04 us     13.00 us  1700704.00 us        884635  INODELK

    Duration: 7485 seconds
   Data Read: 71250527215 bytes
Data Written: 5119903744 bytes

Brick: ovirt3.nwfiber.com:/gluster/brick2/data
----------------------------------------------
Cumulative Stats:
   Block Size:         1b+
 No. of Reads:           0
No. of Writes:     3264419

 %-latency  Avg-latency  Min-Latency   Max-Latency  No. of calls  Fop
 ---------  -----------  -----------   -----------  ------------  ----
      0.00      0.00 us      0.00 us       0.00 us            90  FORGET
      0.00      0.00 us      0.00 us       0.00 us          9462  RELEASE
      0.00      0.00 us      0.00 us       0.00 us          4254  RELEASEDIR
      0.00     50.52 us     13.00 us     190.00 us            71  FLUSH
      0.00    186.97 us     87.00 us     713.00 us            86  REMOVEXATTR
      0.00     79.32 us     33.00 us     189.00 us           228  LK
      0.00    220.98 us    129.00 us     513.00 us            86  SETATTR
      0.01    259.30 us     26.00 us    2632.00 us           137  READDIR
      0.02    322.76 us    145.00 us    2125.00 us           321  XATTROP
      0.03    109.55 us      2.00 us    1258.00 us          1193  OPENDIR
      0.05     70.21 us     21.00 us     431.00 us          3196  DISCARD
      0.05    169.26 us     21.00 us    2315.00 us          1545  GETXATTR
      0.12    176.85 us     63.00 us    2844.00 us          3206  OPEN
      0.61    303.49 us     90.00 us    3085.00 us          9633  FSTAT
      2.44    305.66 us     28.00 us    3716.00 us         38230  LOOKUP
      4.52    266.22 us     55.00 us   53424.00 us         81455  FXATTROP
      6.96   1397.99 us     51.00 us   64822.00 us         23889  FSYNC
     16.48     84.74 us     25.00 us    6917.00 us        932592  WRITE
     30.16    106.90 us     13.00 us  3920189.00 us       1353046 INODELK
     38.55   1794.52 us     14.00 us 16210553.00 us        103039 FINODELK

    Duration: 66562 seconds
   Data Read: 0 bytes
Data Written: 3264419 bytes

Interval 1 Stats:
   Block Size:         1b+
 No. of Reads:           0
No. of Writes:      794080

 %-latency  Avg-latency  Min-Latency   Max-Latency  No. of calls  Fop
 ---------  -----------  -----------   -----------  ------------  ----
      0.00      0.00 us      0.00 us       0.00 us          2093  RELEASE
      0.00      0.00 us      0.00 us       0.00 us          1093  RELEASEDIR
      0.00     70.31 us     26.00 us     125.00 us            16  FLUSH
      0.00    193.10 us    103.00 us     713.00 us            71  REMOVEXATTR
      0.01    227.32 us    133.00 us     513.00 us            71  SETATTR
      0.01     79.32 us     33.00 us     189.00 us           228  LK
      0.01    259.83 us     35.00 us    1138.00 us            89  READDIR
      0.03    318.26 us    145.00 us    2047.00 us           253  XATTROP
      0.04    112.67 us      3.00 us    1258.00 us          1093  OPENDIR
      0.06    167.98 us     23.00 us    1951.00 us          1014  GETXATTR
      0.08     70.97 us     22.00 us     431.00 us          3053  DISCARD
      0.13    183.78 us     66.00 us    2844.00 us          2091  OPEN
      1.01    303.82 us     90.00 us    3085.00 us          9610  FSTAT
      3.27    316.59 us     30.00 us    3716.00 us         29820  LOOKUP
      5.83    265.79 us     59.00 us   53424.00 us         63296  FXATTROP
      7.95   1373.89 us     51.00 us   64822.00 us         16717  FSYNC
     23.17    851.99 us     14.00 us 16210553.00 us         78555  FINODELK
     24.04     87.44 us     27.00 us    6917.00 us        794081  WRITE
     34.36    111.91 us     14.00 us  984871.00 us        886790  INODELK

    Duration: 7485 seconds
   Data Read: 0 bytes
Data Written: 794080 bytes

-----------------------

Here is the data from the volume that is backed by the SSHDs and has one failed disk:

[root@ovirt2 yum.repos.d]# gluster volume profile data-hdd info
Brick: 172.172.1.12:/gluster/brick3/data-hdd
--------------------------------------------
Cumulative Stats:
   Block Size:       256b+      512b+     1024b+
 No. of Reads:        1702         86         16
No. of Writes:           0        767         71

   Block Size:      2048b+     4096b+     8192b+
 No. of Reads:          19      51841       2049
No. of Writes:          76      60668      35727

   Block Size:     16384b+    32768b+    65536b+
 No. of Reads:        1744        639       1088
No. of Writes:        8524       2410       1285

   Block Size:    131072b+
 No. of Reads:      771999
No. of Writes:       29584

 %-latency  Avg-latency  Min-Latency   Max-Latency  No. of calls  Fop
 ---------  -----------  -----------   -----------  ------------  ----
      0.00      0.00 us      0.00 us       0.00 us          2902  RELEASE
      0.00      0.00 us      0.00 us       0.00 us          1517  RELEASEDIR
      0.00    197.00 us    197.00 us     197.00 us             1  FTRUNCATE
      0.00     70.24 us     16.00 us     758.00 us            51  FLUSH
      0.00    143.93 us     82.00 us     305.00 us            57  REMOVEXATTR
      0.00    178.63 us    105.00 us     712.00 us            60  SETATTR
      0.00     67.30 us     19.00 us     572.00 us           555  LK
      0.00    322.80 us     23.00 us    4673.00 us           138  READDIR
      0.00    336.56 us    106.00 us   11994.00 us           237  XATTROP
      0.00     84.70 us     28.00 us    1071.00 us          3469  STATFS
      0.01    387.75 us      2.00 us  146017.00 us          1467  OPENDIR
      0.01    148.59 us     21.00 us   64374.00 us          4454  STAT
      0.02    783.02 us     16.00 us   93502.00 us          1902  GETXATTR
      0.03   1516.10 us     17.00 us  210690.00 us          1364  ENTRYLK
      0.03   2555.47 us    300.00 us  674454.00 us          1064  READDIRP
      0.07     85.74 us     19.00 us   68340.00 us         62849  FSTAT
      0.07   1978.12 us     59.00 us  202596.00 us          2729  OPEN
      0.22    708.57 us     15.00 us  394799.00 us         25447  LOOKUP
      5.94   2331.74 us     15.00 us  1099530.00 us        207534 FINODELK
      7.31   8311.75 us     58.00 us  1800216.00 us         71668 FXATTROP
     12.49   7735.19 us     51.00 us  3595513.00 us        131642 WRITE
     17.70    957.08 us     16.00 us 13700466.00 us       1508160 INODELK
     24.55   2546.43 us     26.00 us  5077347.00 us        786060 READ
     31.56  49699.15 us     47.00 us  3746331.00 us         51777 FSYNC

    Duration: 10101 seconds
   Data Read: 101562897361 bytes
Data Written: 4834450432 bytes

Interval 0 Stats:
   Block Size:       256b+      512b+     1024b+
 No. of Reads:        1702         86         16
No. of Writes:           0        767         71

   Block Size:      2048b+     4096b+     8192b+
 No. of Reads:          19      51841       2049
No. of Writes:          76      60668      35727

   Block Size:     16384b+    32768b+    65536b+
 No. of Reads:        1744        639       1088
No. of Writes:        8524       2410       1285

   Block Size:    131072b+
 No. of Reads:      771999
No. of Writes:       29584

 %-latency  Avg-latency  Min-Latency   Max-Latency  No. of calls  Fop
 ---------  -----------  -----------   -----------  ------------  ----
      0.00      0.00 us      0.00 us       0.00 us          2902  RELEASE
      0.00      0.00 us      0.00 us       0.00 us          1517  RELEASEDIR
      0.00    197.00 us    197.00 us     197.00 us             1  FTRUNCATE
      0.00     70.24 us     16.00 us     758.00 us            51  FLUSH
      0.00    143.93 us     82.00 us     305.00 us            57  REMOVEXATTR
      0.00    178.63 us    105.00 us     712.00 us            60  SETATTR
      0.00     67.30 us     19.00 us     572.00 us           555  LK
      0.00    322.80 us     23.00 us    4673.00 us           138  READDIR
      0.00    336.56 us    106.00 us   11994.00 us           237  XATTROP
      0.00     84.70 us     28.00 us    1071.00 us          3469  STATFS
      0.01    387.75 us      2.00 us  146017.00 us          1467  OPENDIR
      0.01    148.59 us     21.00 us   64374.00 us          4454  STAT
      0.02    783.02 us     16.00 us   93502.00 us          1902  GETXATTR
      0.03   1516.10 us     17.00 us  210690.00 us          1364  ENTRYLK
      0.03   2555.47 us    300.00 us  674454.00 us          1064  READDIRP
      0.07     85.73 us     19.00 us   68340.00 us         62849  FSTAT
      0.07   1978.12 us     59.00 us  202596.00 us          2729  OPEN
      0.22    708.57 us     15.00 us  394799.00 us         25447  LOOKUP
      5.94   2334.57 us     15.00 us  1099530.00 us        207534 FINODELK
      7.31   8311.49 us     58.00 us  1800216.00 us         71668 FXATTROP
     12.49   7735.32 us     51.00 us  3595513.00 us        131642 WRITE
     17.71    957.08 us     16.00 us 13700466.00 us       1508160 INODELK
     24.56   2546.42 us     26.00 us  5077347.00 us        786060 READ
     31.54  49651.63 us     47.00 us  3746331.00 us         51777 FSYNC

    Duration: 10101 seconds
   Data Read: 101562897361 bytes
Data Written: 4834450432 bytes

On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir <jim@palousetech.com> wrote:
Thank you for your response.
I have 4 gluster volumes. 3 are replica 2 + arbiter: the data bricks are on ovirt1 and ovirt2, with the arbiter brick on ovirt3. The 4th volume is replica 3, with a brick on all three ovirt machines.
The first 3 volumes are on an SSD; the 4th is on a Seagate SSHD (the same in all three machines). On ovirt3, the SSHD has reported hard I/O failures, and that brick is offline. However, the other two replicas are fully operational (although they still show entries in the heal info command that won't go away; that may be expected until I replace the failed disk).
What is bothering me is that ALL 4 gluster volumes are showing horrible performance issues. At this point, as the bad disk has been completely offlined, I would expect gluster to perform at normal speed, but that is definitely not the case.
I've also noticed that the performance hits seem to come in waves: things work acceptably (but slowly) for a while, then suddenly it's as if all disk IO on all volumes (including the hosts' non-gluster local OS disk volumes) pauses for about 30 seconds, then IO resumes. During those windows I start getting "VM not responding" and "host not responding" notices, and the applications have major issues.
I've shut down most of my VMs and am down to just my essential core VMs (I shed about 75% of my VMs). I'm still experiencing the same issues.
Am I correct in believing that once the failed disk was brought offline that performance should return to normal?
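One crude way to see whether those ~30-second pauses line up with storage flushes is to time small fsync'd writes in a loop. This is only a sketch, not from the thread: the probe path, sample count, and 4k write size are arbitrary choices.

```shell
# Probe for the periodic I/O stalls described above: time a small
# fsync'd write per loop iteration and report the latency in ms.
# During a "wave", a sample should jump from a few ms to tens of seconds.
probe=/tmp/ioprobe.$$     # arbitrary scratch file; put it on the suspect filesystem
samples=3                 # raise this to run through a full stall window
for i in $(seq 1 "$samples"); do
  start=$(date +%s%N)
  dd if=/dev/zero of="$probe" bs=4k count=1 conv=fsync 2>/dev/null
  end=$(date +%s%N)
  ms=$(( (end - start) / 1000000 ))
  echo "sample $i: fsync write took ${ms} ms"
done
rm -f "$probe"
```

Running it from a brick mount point (e.g. under /gluster/brick3) rather than /tmp would exercise the suspect disk directly.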
On Tue, May 29, 2018 at 1:27 PM, Alex K <rightkicktech@gmail.com> wrote:
I would check disks status and accessibility of mount points where your gluster volumes reside.
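Alex's suggestion can be turned into a quick scripted check. A sketch only: the brick paths are the ones mentioned in this thread (adjust to your layout), /dev/sda is an example device, and smartctl runs only if smartmontools is installed.

```shell
# Verify each brick filesystem is actually mounted and writable,
# then look for recent kernel block-layer errors and SMART health.
status=ok
for brick in /gluster/brick1 /gluster/brick2 /gluster/brick3; do
  if mountpoint -q "$brick" 2>/dev/null; then
    if touch "$brick/.probe.$$" 2>/dev/null; then
      rm -f "$brick/.probe.$$"
      echo "$brick: mounted and writable"
    else
      echo "$brick: mounted but NOT writable"
      status=bad
    fi
  else
    echo "$brick: not a mount point on this host"
  fi
done
# Recent I/O errors, if the kernel log is readable here:
dmesg 2>/dev/null | grep -iE 'blk_update_request|I/O error' | tail -n 5
# SMART health, if available (device name is an example):
command -v smartctl >/dev/null 2>&1 && smartctl -H /dev/sda \
  || echo "smartctl not installed; skipping SMART check"
echo "disk check finished: $status"
```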
On Tue, May 29, 2018, 22:28 Jim Kusznir <jim@palousetech.com> wrote:
On one ovirt server, I'm now seeing these messages:

[56474.239725] blk_update_request: 63 callbacks suppressed
[56474.239732] blk_update_request: I/O error, dev dm-2, sector 0
[56474.240602] blk_update_request: I/O error, dev dm-2, sector 3905945472
[56474.241346] blk_update_request: I/O error, dev dm-2, sector 3905945584
[56474.242236] blk_update_request: I/O error, dev dm-2, sector 2048
[56474.243072] blk_update_request: I/O error, dev dm-2, sector 3905943424
[56474.243997] blk_update_request: I/O error, dev dm-2, sector 3905943536
[56474.247347] blk_update_request: I/O error, dev dm-2, sector 0
[56474.248315] blk_update_request: I/O error, dev dm-2, sector 3905945472
[56474.249231] blk_update_request: I/O error, dev dm-2, sector 3905945584
[56474.250221] blk_update_request: I/O error, dev dm-2, sector 2048
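Since "dm-2" is a device-mapper node, the next step is to work out which LV and physical disk it actually is. A sketch under assumptions: none of these names are from the thread, the tools may be absent on minimal installs (hence the guards), and the /sys path only exists while dm-2 does.

```shell
# Map dm-N kernel names back to friendly LV names and backing disks.
if command -v lsblk >/dev/null 2>&1; then
  # KNAME shows the dm-N kernel name next to the device-mapper name.
  lsblk -o NAME,KNAME,TYPE,SIZE,MOUNTPOINT
else
  echo "lsblk not available"
fi
# The device-mapper name table, if dmsetup is installed:
command -v dmsetup >/dev/null 2>&1 && dmsetup ls || echo "dmsetup not available"
# Physical devices underneath dm-2, if that node exists right now:
if [ -d /sys/block/dm-2/slaves ]; then
  echo "dm-2 is backed by: $(ls /sys/block/dm-2/slaves)"
else
  echo "no /sys/block/dm-2 on this host"
fi
mapped=done
```

If dm-2 turns out to sit on the failing SSHD, the blk_update_request errors above are just that disk dying rather than a new problem.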
On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir <jim@palousetech.com> wrote:
I see in messages on ovirt3 (my 3rd machine, the one upgraded to 4.2):
May 29 11:54:41 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
May 29 11:54:51 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
May 29 11:55:01 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)

(this appears a lot)
I also found, in the ssh session on that machine, some sysv warnings about the backing disk for one of the gluster volumes (the straight replica 3 one). The glusterfs process for that disk on that machine went offline. It's my understanding that it should continue to work with the other two machines while I attempt to replace that disk, right? Attempted writes (touching an empty file) can take 15 seconds, although repeating the write later is much faster.
Gluster generates a bunch of different log files; I don't know which ones you want, or from which machine(s).
How do I do "volume profiling"?
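For reference, profiling is built into the gluster CLI and is enabled per volume. A minimal sketch using the data-hdd volume name from this thread, guarded so it is a no-op where the gluster CLI isn't installed:

```shell
vol=data-hdd    # volume name from this thread
if command -v gluster >/dev/null 2>&1; then
  gluster volume profile "$vol" start   # begin collecting per-fop latency stats
  sleep 1                               # in practice, let minutes of real I/O accumulate
  gluster volume profile "$vol" info    # print cumulative + interval stats
  gluster volume profile "$vol" stop    # stop collecting to avoid overhead
else
  echo "gluster CLI not installed on this host"
fi
profiled=done
```

The `info` output is exactly the kind of per-fop latency table quoted elsewhere in this thread.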
Thanks!
On Tue, May 29, 2018 at 11:53 AM, Sahina Bose <sabose@redhat.com> wrote:
Do you see errors reported in the mount logs for the volume? If so, could you attach the logs? Any issues with your underlying disks? Can you also attach the output of volume profiling?
On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir <jim@palousetech.com> wrote:
Ok, things have gotten MUCH worse this morning. I'm getting random errors from VMs; right now about a third of my VMs have been paused due to storage issues, and most of the remaining VMs are not performing well.
At this point, I am in full EMERGENCY mode, as my production services are now impacted, and I'm getting calls coming in with problems...
I'd greatly appreciate help... VMs are running VERY slowly (when they run at all), and they are steadily getting worse. I don't know why. I was seeing CPU peaks (to 100%) on several VMs, in perfect sync, for a few minutes at a time (while the VM became unresponsive and any Linux VMs I was logged into were giving me the CPU-stuck messages from my original post). Is all this storage related?
I also have two different gluster volumes for VM storage; only one had the issues at first, but now VMs on both are being affected at the same time and in the same way.
--Jim
On Mon, May 28, 2018 at 10:50 PM, Sahina Bose <sabose@redhat.com> wrote:
> [Adding gluster-users to look at the heal issue]

I also finally found the following in my system log on one server: [10679.524491] INFO: task glusterclogro:14933 blocked for more than 120 seconds. [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10679.527144] glusterclogro D ffff97209832bf40 0 14933 1 0x00000080 [10679.527150] Call Trace: [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.527268] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [10679.527271] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [10679.527275] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.527279] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds. [10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10679.529956] glusterposixfsy D ffff972495f84f10 0 14941 1 0x00000080 [10679.529961] Call Trace: [10679.529966] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.530003] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.530008] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.530038] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.530042] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.530046] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [10679.530050] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [10679.530054] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.530058] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120 seconds. [10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
[10679.533732] glusteriotwr13 D ffff9720a83f0000 0 15486 1 0x00000080 [10679.533738] Call Trace: [10679.533747] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.533799] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.533806] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.533846] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.533852] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.533858] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [10679.533863] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [10679.533868] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.533873] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [10919.512757] INFO: task glusterclogro:14933 blocked for more than 120 seconds. [10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10919.516663] glusterclogro D ffff97209832bf40 0 14933 1 0x00000080 [10919.516677] Call Trace: [10919.516690] [<ffffffffb9913f79>] schedule+0x29/0x70 [10919.516696] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [10919.516703] [<ffffffffb951cc04>] ? blk_finish_plug+0x14/0x40 [10919.516768] [<ffffffffc05e9224>] ? _xfs_buf_ioapply+0x334/0x460 [xfs] [10919.516774] [<ffffffffb991432d>] wait_for_completion+0xfd/0x140 [10919.516782] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10919.516821] [<ffffffffc05eb0a3>] ? _xfs_buf_read+0x23/0x40 [xfs] [10919.516859] [<ffffffffc05eafa9>] xfs_buf_submit_wait+0xf9/0x1d0 [xfs] [10919.516902] [<ffffffffc061b279>] ? 
xfs_trans_read_buf_map+0x199/0x400 [xfs] [10919.516940] [<ffffffffc05eb0a3>] _xfs_buf_read+0x23/0x40 [xfs] [10919.516977] [<ffffffffc05eb1b9>] xfs_buf_read_map+0xf9/0x160 [xfs] [10919.517022] [<ffffffffc061b279>] xfs_trans_read_buf_map+0x199/0x400 [xfs] [10919.517057] [<ffffffffc05c8d04>] xfs_da_read_buf+0xd4/0x100 [xfs] [10919.517091] [<ffffffffc05c8d53>] xfs_da3_node_read+0x23/0xd0 [xfs] [10919.517126] [<ffffffffc05c9fee>] xfs_da3_node_lookup_int+0x6e/0x2f0 [xfs] [10919.517160] [<ffffffffc05d5a1d>] xfs_dir2_node_lookup+0x4d/0x170 [xfs] [10919.517194] [<ffffffffc05ccf5d>] xfs_dir_lookup+0x1bd/0x1e0 [xfs] [10919.517233] [<ffffffffc05fd8d9>] xfs_lookup+0x69/0x140 [xfs] [10919.517271] [<ffffffffc05fa018>] xfs_vn_lookup+0x78/0xc0 [xfs] [10919.517278] [<ffffffffb9425cf3>] lookup_real+0x23/0x60 [10919.517283] [<ffffffffb9426702>] __lookup_hash+0x42/0x60 [10919.517288] [<ffffffffb942d519>] SYSC_renameat2+0x3a9/0x5a0 [10919.517296] [<ffffffffb94d3753>] ? selinux_file_free_security+0x23/0x30 [10919.517304] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [10919.517309] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [10919.517313] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [10919.517318] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [10919.517323] [<ffffffffb942e58e>] SyS_renameat2+0xe/0x10 [10919.517328] [<ffffffffb942e5ce>] SyS_rename+0x1e/0x20 [10919.517333] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10919.517339] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [11159.496095] INFO: task glusteriotwr9:15482 blocked for more than 120 seconds. [11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11159.498978] glusteriotwr9 D ffff971fa0fa1fa0 0 15482 1 0x00000080 [11159.498984] Call Trace: [11159.498995] [<ffffffffb9911f00>] ? 
bit_wait+0x50/0x50 [11159.498999] [<ffffffffb9913f79>] schedule+0x29/0x70 [11159.499003] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11159.499056] [<ffffffffc05dd9b7>] ? xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs] [11159.499082] [<ffffffffc05dd43e>] ? xfs_iext_bno_to_irec+0x8e/0xd0 [xfs] [11159.499090] [<ffffffffb92f7a12>] ? ktime_get_ts64+0x52/0xf0 [11159.499093] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50 [11159.499097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20 [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50 [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0 [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90 [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40 [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70 [11159.499125] [<ffffffffb9394e85>] grab_cache_page_write_begin+0x55/0xc0 [11159.499130] [<ffffffffb9484b76>] iomap_write_begin+0x66/0x100 [11159.499135] [<ffffffffb9484edf>] iomap_write_actor+0xcf/0x1d0 [11159.499140] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 [11159.499144] [<ffffffffb94854e7>] iomap_apply+0xb7/0x150 [11159.499149] [<ffffffffb9485621>] iomap_file_buffered_write+0xa1/0xe0 [11159.499153] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 [11159.499182] [<ffffffffc05f025d>] xfs_file_buffered_aio_write+0x12d/0x2c0 [xfs] [11159.499213] [<ffffffffc05f057d>] xfs_file_aio_write+0x18d/0x1b0 [xfs] [11159.499217] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 [11159.499222] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 [11159.499225] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 [11159.499230] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [11159.499234] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11159.499238] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than 120 seconds. 
[11279.490197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.491665] xfsaild/dm-10 D ffff9720a8660fd0 0 1134 2 0x00000000 [11279.491671] Call Trace: [11279.491682] [<ffffffffb92a3a2e>] ? try_to_del_timer_sync+0x5e/0x90 [11279.491688] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.491744] [<ffffffffc060de36>] _xfs_log_force+0x1c6/0x2c0 [xfs] [11279.491750] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.491783] [<ffffffffc0619fec>] ? xfsaild+0x16c/0x6f0 [xfs] [11279.491817] [<ffffffffc060df5c>] xfs_log_force+0x2c/0x70 [xfs] [11279.491849] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs] [11279.491880] [<ffffffffc0619fec>] xfsaild+0x16c/0x6f0 [xfs] [11279.491913] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs] [11279.491919] [<ffffffffb92bb161>] kthread+0xd1/0xe0 [11279.491926] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 [11279.491932] [<ffffffffb9920677>] ret_from_fork_nospec_begin+0x21/0x21 [11279.491936] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 [11279.491976] INFO: task glusterclogfsyn:14934 blocked for more than 120 seconds. [11279.493466] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.494952] glusterclogfsyn D ffff97209832af70 0 14934 1 0x00000080 [11279.494957] Call Trace: [11279.494979] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod] [11279.494983] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.494987] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11279.494997] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod] [11279.495001] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11279.495005] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140 [11279.495010] [<ffffffffb92cf1b0>] ? 
wake_up_state+0x20/0x20 [11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 [11279.495049] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs] [11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] [11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.495090] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [11279.495094] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [11279.495098] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.495102] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [11279.495105] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds. [11279.496606] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.498114] glusterposixfsy D ffff972495f84f10 0 14941 1 0x00000080 [11279.498118] Call Trace: [11279.498134] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod] [11279.498138] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.498142] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11279.498152] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod] [11279.498156] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11279.498160] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140 [11279.498165] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.498169] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 [11279.498202] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs] [11279.498231] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] [11279.498238] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.498242] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [11279.498246] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [11279.498250] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.498254] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [11279.498257] INFO: task glusteriotwr1:14950 blocked for more than 120 seconds. 
[11279.499789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.501343] glusteriotwr1 D ffff97208b6daf70 0 14950 1 0x00000080 [11279.501348] Call Trace: [11279.501353] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.501390] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [11279.501396] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.501428] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [11279.501432] [<ffffffffb944ef3f>] generic_write_sync+0x4f/0x70 [11279.501461] [<ffffffffc05f0545>] xfs_file_aio_write+0x155/0x1b0 [xfs] [11279.501466] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 [11279.501471] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 [11279.501475] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 [11279.501479] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [11279.501483] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.501489] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [11279.501493] INFO: task glusteriotwr4:14953 blocked for more than 120 seconds. [11279.503047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.504630] glusteriotwr4 D ffff972499f2bf40 0 14953 1 0x00000080 [11279.504635] Call Trace: [11279.504640] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.504676] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [11279.504681] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.504710] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [11279.504714] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.504718] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [11279.504722] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [11279.504725] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.504730] [<ffffffffb992077b>] ? 
system_call_after_swapgs+0xc8/0x160 [12127.466494] perf: interrupt took too long (8263 > 8150), lowering kernel.perf_event_max_sample_rate to 24000

--------------------

I think this is the cause of the massive oVirt performance issues, irrespective of gluster volume. At the time this happened, I was also ssh'ed into the host and running some rpm query commands. I had just run rpm -qa | grep glusterfs (to verify which version was actually installed), and that command took almost 2 minutes to return! Normally it takes less than 2 seconds, and that is all purely local SSD I/O, too....

I'm no expert, but it's my understanding that any time software causes these kinds of issues, it's a serious bug in that software, even if it's just mishandled exceptions. Is this correct?

--Jim

On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir <jim@palousetech.com> wrote:
I think this is the profile information for one of the volumes that lives on the SSDs and is fully operational with no down/problem disks:
[root@ovirt2 yum.repos.d]# gluster volume profile data info
Brick: ovirt2.nwfiber.com:/gluster/brick2/data
----------------------------------------------
Cumulative Stats:
   Block Size:        256b+        512b+       1024b+
 No. of Reads:          983         2696         1059
No. of Writes:            0         1113          302

   Block Size:       2048b+       4096b+       8192b+
 No. of Reads:          852        88608        53526
No. of Writes:          522       812340        76257

   Block Size:      16384b+      32768b+      65536b+
 No. of Reads:        54351       241901        15024
No. of Writes:        21636         8656         8976

   Block Size:     131072b+
 No. of Reads:       524156
No. of Writes:       296071

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us           4189     RELEASE
      0.00       0.00 us       0.00 us       0.00 us           1257  RELEASEDIR
      0.00      46.19 us      12.00 us     187.00 us             69       FLUSH
      0.00     147.00 us      78.00 us     367.00 us             86 REMOVEXATTR
      0.00     223.46 us      24.00 us    1166.00 us            149     READDIR
      0.00     565.34 us      76.00 us    3639.00 us             88   FTRUNCATE
      0.00     263.28 us      20.00 us   28385.00 us            228          LK
      0.00      98.84 us       2.00 us     880.00 us           1198     OPENDIR
      0.00      91.59 us      26.00 us   10371.00 us           3853      STATFS
      0.00     494.14 us      17.00 us  193439.00 us           1171    GETXATTR
      0.00     299.42 us      35.00 us    9799.00 us           2044    READDIRP
      0.00    1965.31 us     110.00 us  382258.00 us            321     XATTROP
      0.01     113.40 us      24.00 us   61061.00 us           8134        STAT
      0.01     755.38 us      57.00 us  607603.00 us           3196     DISCARD
      0.05    2690.09 us      58.00 us 2704761.00 us           3206        OPEN
      0.10  119978.25 us      97.00 us 9406684.00 us            154     SETATTR
      0.18     101.73 us      28.00 us  700477.00 us         313379       FSTAT
      0.23    1059.84 us      25.00 us 2716124.00 us          38255      LOOKUP
      0.47    1024.11 us      54.00 us 6197164.00 us          81455    FXATTROP
      1.72    2984.00 us      15.00 us 37098954.00 us        103020    FINODELK
      5.92   44315.32 us      51.00 us 24731536.00 us         23957       FSYNC
     13.27    2399.78 us      25.00 us 22089540.00 us        991005        READ
     37.00    5980.43 us      52.00 us 22099889.00 us       1108976       WRITE
     41.04    5452.75 us      13.00 us 22102452.00 us       1349053     INODELK

    Duration: 10026 seconds
   Data Read: 80046027759 bytes
Data Written: 44496632320 bytes
Interval 1 Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 983 2696 1059 No. of Writes: 0 838 185
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 852 85856 51575 No. of Writes: 382 705802 57812
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 52673 232093 14984 No. of Writes: 13499 4908 4242
Block Size: 131072b+ No. of Reads: 460040 No. of Writes: 6411 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2093 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1093 RELEASEDIR 0.00 53.38 us 26.00 us 111.00 us 16 FLUSH 0.00 145.14 us 78.00 us 367.00 us 71 REMOVEXATTR 0.00 190.96 us 114.00 us 298.00 us 71 SETATTR 0.00 213.38 us 24.00 us 1145.00 us 90 READDIR 0.00 263.28 us 20.00 us 28385.00 us 228 LK 0.00 101.76 us 2.00 us 880.00 us 1093 OPENDIR 0.01 93.60 us 27.00 us 10371.00 us 3090 STATFS 0.02 537.47 us 17.00 us 193439.00 us 1038 GETXATTR 0.03 297.44 us 35.00 us 9799.00 us 1990 READDIRP 0.03 2357.28 us 110.00 us 382258.00 us 253 XATTROP 0.04 385.93 us 58.00 us 47593.00 us 2091 OPEN 0.04 114.86 us 24.00 us 61061.00 us 7715 STAT 0.06 444.59 us 57.00 us 333240.00 us 3053 DISCARD 0.42 316.24 us 25.00 us 290728.00 us 29823 LOOKUP 0.73 257.92 us 54.00 us 344812.00 us 63296 FXATTROP 1.37 98.30 us 28.00 us 67621.00 us 313172 FSTAT 1.58 2124.69 us 51.00 us 849200.00 us 16717 FSYNC 5.73 162.46 us 52.00 us 748492.00 us 794079 WRITE 7.19 2065.17 us 16.00 us 37098954.00 us 78381 FINODELK 36.44 886.32 us 25.00 us 2216436.00 us 925421 READ 46.30 1178.04 us 13.00 us 1700704.00 us 884635 INODELK
Duration: 7485 seconds Data Read: 71250527215 bytes Data Written: 5119903744 bytes
Brick: ovirt3.nwfiber.com:/gluster/brick2/data ---------------------------------------------- Cumulative Stats: Block Size: 1b+ No. of Reads: 0 No. of Writes: 3264419 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 90 FORGET 0.00 0.00 us 0.00 us 0.00 us 9462 RELEASE 0.00 0.00 us 0.00 us 0.00 us 4254 RELEASEDIR 0.00 50.52 us 13.00 us 190.00 us 71 FLUSH 0.00 186.97 us 87.00 us 713.00 us 86 REMOVEXATTR 0.00 79.32 us 33.00 us 189.00 us 228 LK 0.00 220.98 us 129.00 us 513.00 us 86 SETATTR 0.01 259.30 us 26.00 us 2632.00 us 137 READDIR 0.02 322.76 us 145.00 us 2125.00 us 321 XATTROP 0.03 109.55 us 2.00 us 1258.00 us 1193 OPENDIR 0.05 70.21 us 21.00 us 431.00 us 3196 DISCARD 0.05 169.26 us 21.00 us 2315.00 us 1545 GETXATTR 0.12 176.85 us 63.00 us 2844.00 us 3206 OPEN 0.61 303.49 us 90.00 us 3085.00 us 9633 FSTAT 2.44 305.66 us 28.00 us 3716.00 us 38230 LOOKUP 4.52 266.22 us 55.00 us 53424.00 us 81455 FXATTROP 6.96 1397.99 us 51.00 us 64822.00 us 23889 FSYNC 16.48 84.74 us 25.00 us 6917.00 us 932592 WRITE 30.16 106.90 us 13.00 us 3920189.00 us 1353046 INODELK 38.55 1794.52 us 14.00 us 16210553.00 us 103039 FINODELK
Duration: 66562 seconds Data Read: 0 bytes Data Written: 3264419 bytes
Interval 1 Stats: Block Size: 1b+ No. of Reads: 0 No. of Writes: 794080 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2093 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1093 RELEASEDIR 0.00 70.31 us 26.00 us 125.00 us 16 FLUSH 0.00 193.10 us 103.00 us 713.00 us 71 REMOVEXATTR 0.01 227.32 us 133.00 us 513.00 us 71 SETATTR 0.01 79.32 us 33.00 us 189.00 us 228 LK 0.01 259.83 us 35.00 us 1138.00 us 89 READDIR 0.03 318.26 us 145.00 us 2047.00 us 253 XATTROP 0.04 112.67 us 3.00 us 1258.00 us 1093 OPENDIR 0.06 167.98 us 23.00 us 1951.00 us 1014 GETXATTR 0.08 70.97 us 22.00 us 431.00 us 3053 DISCARD 0.13 183.78 us 66.00 us 2844.00 us 2091 OPEN 1.01 303.82 us 90.00 us 3085.00 us 9610 FSTAT 3.27 316.59 us 30.00 us 3716.00 us 29820 LOOKUP 5.83 265.79 us 59.00 us 53424.00 us 63296 FXATTROP 7.95 1373.89 us 51.00 us 64822.00 us 16717 FSYNC 23.17 851.99 us 14.00 us 16210553.00 us 78555 FINODELK 24.04 87.44 us 27.00 us 6917.00 us 794081 WRITE 34.36 111.91 us 14.00 us 984871.00 us 886790 INODELK
Duration: 7485 seconds Data Read: 0 bytes Data Written: 794080 bytes
-----------------------

Here is the data from the volume that is backed by the SSHDs and has one failed disk:

[root@ovirt2 yum.repos.d]# gluster volume profile data-hdd info
Brick: 172.172.1.12:/gluster/brick3/data-hdd
--------------------------------------------
Cumulative Stats:
   Block Size:        256b+        512b+       1024b+
 No. of Reads:         1702           86           16
No. of Writes:            0          767           71

   Block Size:       2048b+       4096b+       8192b+
 No. of Reads:           19        51841         2049
No. of Writes:           76        60668        35727

   Block Size:      16384b+      32768b+      65536b+
 No. of Reads:         1744          639         1088
No. of Writes:         8524         2410         1285

   Block Size:     131072b+
 No. of Reads:       771999
No. of Writes:        29584

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us           2902     RELEASE
      0.00       0.00 us       0.00 us       0.00 us           1517  RELEASEDIR
      0.00     197.00 us     197.00 us     197.00 us              1   FTRUNCATE
      0.00      70.24 us      16.00 us     758.00 us             51       FLUSH
      0.00     143.93 us      82.00 us     305.00 us             57 REMOVEXATTR
      0.00     178.63 us     105.00 us     712.00 us             60     SETATTR
      0.00      67.30 us      19.00 us     572.00 us            555          LK
      0.00     322.80 us      23.00 us    4673.00 us            138     READDIR
      0.00     336.56 us     106.00 us   11994.00 us            237     XATTROP
      0.00      84.70 us      28.00 us    1071.00 us           3469      STATFS
      0.01     387.75 us       2.00 us  146017.00 us           1467     OPENDIR
      0.01     148.59 us      21.00 us   64374.00 us           4454        STAT
      0.02     783.02 us      16.00 us   93502.00 us           1902    GETXATTR
      0.03    1516.10 us      17.00 us  210690.00 us           1364     ENTRYLK
      0.03    2555.47 us     300.00 us  674454.00 us           1064    READDIRP
      0.07      85.74 us      19.00 us   68340.00 us          62849       FSTAT
      0.07    1978.12 us      59.00 us  202596.00 us           2729        OPEN
      0.22     708.57 us      15.00 us  394799.00 us          25447      LOOKUP
      5.94    2331.74 us      15.00 us 1099530.00 us         207534    FINODELK
      7.31    8311.75 us      58.00 us 1800216.00 us          71668    FXATTROP
     12.49    7735.19 us      51.00 us 3595513.00 us         131642       WRITE
     17.70     957.08 us      16.00 us 13700466.00 us       1508160     INODELK
     24.55    2546.43 us      26.00 us 5077347.00 us         786060        READ
     31.56   49699.15 us      47.00 us 3746331.00 us          51777       FSYNC

    Duration: 10101 seconds
   Data Read: 101562897361 bytes
Data Written: 4834450432 bytes
Interval 0 Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 1702 86 16 No. of Writes: 0 767 71
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 19 51841 2049 No. of Writes: 76 60668 35727
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 1744 639 1088 No. of Writes: 8524 2410 1285
Block Size: 131072b+ No. of Reads: 771999 No. of Writes: 29584 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2902 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1517 RELEASEDIR 0.00 197.00 us 197.00 us 197.00 us 1 FTRUNCATE 0.00 70.24 us 16.00 us 758.00 us 51 FLUSH 0.00 143.93 us 82.00 us 305.00 us 57 REMOVEXATTR 0.00 178.63 us 105.00 us 712.00 us 60 SETATTR 0.00 67.30 us 19.00 us 572.00 us 555 LK 0.00 322.80 us 23.00 us 4673.00 us 138 READDIR 0.00 336.56 us 106.00 us 11994.00 us 237 XATTROP 0.00 84.70 us 28.00 us 1071.00 us 3469 STATFS 0.01 387.75 us 2.00 us 146017.00 us 1467 OPENDIR 0.01 148.59 us 21.00 us 64374.00 us 4454 STAT 0.02 783.02 us 16.00 us 93502.00 us 1902 GETXATTR 0.03 1516.10 us 17.00 us 210690.00 us 1364 ENTRYLK 0.03 2555.47 us 300.00 us 674454.00 us 1064 READDIRP 0.07 85.73 us 19.00 us 68340.00 us 62849 FSTAT 0.07 1978.12 us 59.00 us 202596.00 us 2729 OPEN 0.22 708.57 us 15.00 us 394799.00 us 25447 LOOKUP 5.94 2334.57 us 15.00 us 1099530.00 us 207534 FINODELK 7.31 8311.49 us 58.00 us 1800216.00 us 71668 FXATTROP 12.49 7735.32 us 51.00 us 3595513.00 us 131642 WRITE 17.71 957.08 us 16.00 us 13700466.00 us 1508160 INODELK 24.56 2546.42 us 26.00 us 5077347.00 us 786060 READ 31.54 49651.63 us 47.00 us 3746331.00 us 51777 FSYNC
Duration: 10101 seconds Data Read: 101562897361 bytes Data Written: 4834450432 bytes
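When reading profile dumps like these, the useful signal is usually which FOPs dominate %-latency (here FSYNC, INODELK, and FINODELK are the heavy hitters on the degraded data-hdd volume). As an illustration only, here is a minimal sketch that pulls the worst offenders out of profile text; the row layout is assumed from the output above, and the `top_fops` helper and the embedded sample are mine, not part of gluster:

```python
import re

# Sample rows in the shape of `gluster volume profile <vol> info` output
# (values taken from the data-hdd brick above).
SAMPLE = """\
 31.56   49699.15 us   47.00 us 3746331.00 us   51777    FSYNC
 24.55    2546.43 us   26.00 us 5077347.00 us  786060     READ
 17.70     957.08 us   16.00 us 13700466.00 us 1508160  INODELK
"""

# %-latency, avg, min, max latencies, call count, FOP name.
ROW = re.compile(
    r"^\s*([\d.]+)\s+([\d.]+) us\s+([\d.]+) us\s+([\d.]+) us\s+(\d+)\s+(\w+)\s*$",
    re.MULTILINE,
)

def top_fops(text, n=2):
    """Return the n FOPs consuming the largest share of total latency."""
    rows = [(float(m[0]), m[5]) for m in ROW.findall(text)]
    rows.sort(reverse=True)  # highest %-latency first
    return [fop for pct, fop in rows[:n]]

print(top_fops(SAMPLE))
# ['FSYNC', 'READ']
```

Feeding a full brick section through this makes it easy to compare bricks at a glance instead of eyeballing the tables.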
On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir <jim@palousetech.com> wrote:
Thank you for your response.
I have 4 gluster volumes. 3 are replica 2 + arbiter: the data bricks are on ovirt1 and ovirt2, with the arbiter on ovirt3. The 4th volume is replica 3, with a brick on each of the three ovirt machines.
The first 3 volumes are on an SSD disk; the 4th is on a Seagate SSHD (the same model in all three machines). On ovirt3, the SSHD has reported hard I/O failures, and that brick is offline. However, the other two replicas are fully operational (although they still show entries in the heal info output that won't go away; that may be expected until I replace the failed disk).
What is bothering me is that ALL 4 gluster volumes are showing horrible performance issues. At this point, with the bad disk completely offlined, I would expect gluster to perform at normal speed, but that is definitely not the case.
I've also noticed that the performance hits seem to come in waves: things work acceptably (but slowly) for a while, then suddenly it's as if all disk I/O on all volumes (including the hosts' non-gluster local OS disk volumes) pauses for about 30 seconds, then I/O resumes. During those times, I start getting "VM not responding" and "host not responding" notices, and the applications have major issues.
I've shut down most of my VMs and am down to just my essential core VMs (I shed about 75% of them). I'm still experiencing the same issues.
Am I correct in believing that once the failed disk was taken offline, performance should have returned to normal?
On Tue, May 29, 2018 at 1:27 PM, Alex K <rightkicktech@gmail.com> wrote:
I would check disks status and accessibility of mount points where your gluster volumes reside.
On Tue, May 29, 2018, 22:28 Jim Kusznir <jim@palousetech.com> wrote:
On one ovirt server, I'm now seeing these messages:

[56474.239725] blk_update_request: 63 callbacks suppressed
[56474.239732] blk_update_request: I/O error, dev dm-2, sector 0
[56474.240602] blk_update_request: I/O error, dev dm-2, sector 3905945472
[56474.241346] blk_update_request: I/O error, dev dm-2, sector 3905945584
[56474.242236] blk_update_request: I/O error, dev dm-2, sector 2048
[56474.243072] blk_update_request: I/O error, dev dm-2, sector 3905943424
[56474.243997] blk_update_request: I/O error, dev dm-2, sector 3905943536
[56474.247347] blk_update_request: I/O error, dev dm-2, sector 0
[56474.248315] blk_update_request: I/O error, dev dm-2, sector 3905945472
[56474.249231] blk_update_request: I/O error, dev dm-2, sector 3905945584
[56474.250221] blk_update_request: I/O error, dev dm-2, sector 2048
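Repeated blk_update_request errors like these can be tallied per device to confirm whether the failures are confined to a single device-mapper target. This is a minimal sketch (the log format is taken from the lines above; the `summarize_io_errors` helper and the embedded sample are illustrative, not part of any tool):

```python
import re
from collections import defaultdict

# Sample dmesg lines in the shape of the output above.
LOG = """\
[56474.239732] blk_update_request: I/O error, dev dm-2, sector 0
[56474.240602] blk_update_request: I/O error, dev dm-2, sector 3905945472
[56474.242236] blk_update_request: I/O error, dev dm-2, sector 2048
"""

PATTERN = re.compile(r"blk_update_request: I/O error, dev (\S+), sector (\d+)")

def summarize_io_errors(text):
    """Return {device: sorted list of distinct failing sectors}."""
    sectors = defaultdict(set)
    for dev, sector in PATTERN.findall(text):
        sectors[dev].add(int(sector))
    return {dev: sorted(s) for dev, s in sectors.items()}

print(summarize_io_errors(LOG))
# {'dm-2': [0, 2048, 3905945472]}
```

Seeing only one device (here dm-2) with a small set of repeating sectors is consistent with a single failing physical disk rather than a broader storage problem.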
On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir <jim@palousetech.com> wrote:
I see in messages on ovirt3 (my 3rd machine, the one upgraded to 4.2):
May 29 11:54:41 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
May 29 11:54:51 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
May 29 11:55:01 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)

(This appears a lot.)
I also found, in the ssh session on that host, some system log warnings about the backing disk for one of the gluster volumes (the straight replica 3 one). The glusterfs brick process for that disk on that machine went offline. It's my understanding that the volume should continue to work with the other two machines while I attempt to replace that disk, right? Attempted writes (touching an empty file) can take 15 seconds, although repeating the same write later is much faster.
Gluster generates a bunch of different log files; I don't know which ones you want, or from which machine(s).
How do I do "volume profiling"?
Thanks!
On Tue, May 29, 2018 at 11:53 AM, Sahina Bose <sabose@redhat.com> wrote:
Do you see errors reported in the mount logs for the volume? If so, could you attach the logs? Any issues with your underlying disks? Can you also attach the output of volume profiling?
On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir <jim@palousetech.com> wrote:
> Ok, things have gotten MUCH worse this morning. I'm getting random > errors from VMs, right now, about a third of my VMs have been paused due to > storage issues, and most of the remaining VMs are not performing well. > > At this point, I am in full EMERGENCY mode, as my production > services are now impacted, and I'm getting calls coming in with problems... > > I'd greatly appreciate help...VMs are running VERY slowly (when they > run), and they are steadily getting worse. I don't know why. I was seeing > CPU peaks (to 100%) on several VMs, in perfect sync, for a few minutes at a > time (while the VM became unresponsive and any VMs I was logged into that > were linux were giving me the CPU stuck messages in my origional post). Is > all this storage related? > > I also have two different gluster volumes for VM storage, and only > one had the issues, but now VMs in both are being affected at the same time > and same way. > > --Jim > > On Mon, May 28, 2018 at 10:50 PM, Sahina Bose <sabose@redhat.com> > wrote: > >> [Adding gluster-users to look at the heal issue] >> >> On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir <jim@palousetech.com> >> wrote: >> >>> Hello: >>> >>> I've been having some cluster and gluster performance issues >>> lately. I also found that my cluster was out of date, and was trying to >>> apply updates (hoping to fix some of these), and discovered the ovirt 4.1 >>> repos were taken completely offline. So, I was forced to begin an upgrade >>> to 4.2. According to docs I found/read, I needed only add the new repo, do >>> a yum update, reboot, and be good on my hosts (did the yum update, the >>> engine-setup on my hosted engine). Things seemed to work relatively well, >>> except for a gluster sync issue that showed up. >>> >>> My cluster is a 3 node hyperconverged cluster. I upgraded the >>> hosted engine first, then engine 3. When engine 3 came back up, for some >>> reason one of my gluster volumes would not sync. 
Here's sample output: >>> >>> [root@ovirt3 ~]# gluster volume heal data-hdd info >>> Brick 172.172.1.11:/gluster/brick3/data-hdd >>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>> 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>> 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>> Status: Connected >>> Number of entries: 8 >>> >>> Brick 172.172.1.12:/gluster/brick3/data-hdd >>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>> 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>> 
810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>> Status: Connected >>> Number of entries: 8 >>> >>> Brick 172.172.1.13:/gluster/brick3/data-hdd >>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>> 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>> 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>> Status: Connected >>> Number of entries: 8 >>> >>> --------- >>> Its been in this state for a couple days now, and bandwidth >>> monitoring shows no appreciable data moving. I've tried repeatedly >>> commanding a full heal from all three clusters in the node. Its always the >>> same files that need healing. >>> >>> When running gluster volume heal data-hdd statistics, I see >>> sometimes different information, but always some number of "heal failed" >>> entries. It shows 0 for split brain. >>> >>> I'm not quite sure what to do. 
I suspect it may be due to nodes 1 >>> and 2 still being on the older ovirt/gluster release, but I'm afraid to >>> upgrade and reboot them until I have a good gluster sync (don't need to >>> create a split brain issue). How do I proceed with this? >>> >>> Second issue: I've been experiencing VERY POOR performance on most >>> of my VMs. To the tune that logging into a windows 10 vm via remote >>> desktop can take 5 minutes, launching quickbooks inside said vm can easily >>> take 10 minutes. On some linux VMs, I get random messages like this: >>> Message from syslogd@unifi at May 28 20:39:23 ... >>> kernel:[6171996.308904] NMI watchdog: BUG: soft lockup - CPU#0 >>> stuck for 22s! [mongod:14766] >>> >>> (the process and PID are often different) >>> >>> I'm not quite sure what to do about this either. My initial >>> thought was upgrad everything to current and see if its still there, but I >>> cannot move forward with that until my gluster is healed... >>> >>> Thanks! >>> --Jim >>> >>> _______________________________________________ >>> Users mailing list -- users@ovirt.org >>> To unsubscribe send an email to users-leave@ovirt.org >>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ >>> oVirt Code of Conduct: https://www.ovirt.org/communit >>> y/about/community-guidelines/ >>> List Archives: https://lists.ovirt.org/archiv >>> es/list/users@ovirt.org/message/3LEV6ZQ3JV2XLAL7NYBTXOYMYUOTIRQF/ >>> >>> >> >
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/ACO7RFSLBSRBAIONIC2HQ6Z24ZDES5MF/

Adding Ravi to look into the heal issue.

As for the fsync hang and subsequent IO errors, it seems a lot like https://bugzilla.redhat.com/show_bug.cgi?id=1497156 and Paolo Bonzini from qemu had pointed out that this would be fixed by the following commit:

commit e72c9a2a67a6400c8ef3d01d4c461dbbbfa0e1f0
Author: Paolo Bonzini <pbonzini@redhat.com>
Date:   Wed Jun 21 16:35:46 2017 +0200

    scsi: virtio_scsi: let host do exception handling

    virtio_scsi tries to do exception handling after the default 30 seconds
    timeout expires. However, it's better to let the host control the
    timeout, otherwise with a heavy I/O load it is likely that an abort will
    also timeout. This leads to fatal errors like filesystems going offline.

    Disable the 'sd' timeout and allow the host to do exception handling,
    following the precedent of the storvsc driver.

    Hannes has a proposal to introduce timeouts in virtio, but this provides
    an immediate solution for stable kernels too.

    [mkp: fixed typo]

    Reported-by: Douglas Miller <dougmill@linux.vnet.ibm.com>
    Cc: "James E.J. Bottomley" <jejb@linux.vnet.ibm.com>
    Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
    Cc: Hannes Reinecke <hare@suse.de>
    Cc: linux-scsi@vger.kernel.org
    Cc: stable@vger.kernel.org
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>

Adding Paolo/Kevin to comment.

As for the poor gluster performance, could you disable cluster.eager-lock and see if that makes any difference:

# gluster volume set <VOL> cluster.eager-lock off

Do also capture the volume profile again if you still see performance issues after disabling eager-lock.

-Krutika

On Wed, May 30, 2018 at 6:55 AM, Jim Kusznir <jim@palousetech.com> wrote:
I also finally found the following in my system log on one server:
[10679.524491] INFO: task glusterclogro:14933 blocked for more than 120 seconds. [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10679.527144] glusterclogro D ffff97209832bf40 0 14933 1 0x00000080 [10679.527150] Call Trace: [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.527268] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [10679.527271] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [10679.527275] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.527279] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds. [10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10679.529956] glusterposixfsy D ffff972495f84f10 0 14941 1 0x00000080 [10679.529961] Call Trace: [10679.529966] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.530003] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.530008] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.530038] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.530042] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.530046] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [10679.530050] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [10679.530054] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.530058] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120 seconds. [10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
[10679.533732] glusteriotwr13 D ffff9720a83f0000 0 15486 1 0x00000080 [10679.533738] Call Trace: [10679.533747] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.533799] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.533806] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.533846] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.533852] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.533858] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [10679.533863] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [10679.533868] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.533873] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [10919.512757] INFO: task glusterclogro:14933 blocked for more than 120 seconds. [10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10919.516663] glusterclogro D ffff97209832bf40 0 14933 1 0x00000080 [10919.516677] Call Trace: [10919.516690] [<ffffffffb9913f79>] schedule+0x29/0x70 [10919.516696] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [10919.516703] [<ffffffffb951cc04>] ? blk_finish_plug+0x14/0x40 [10919.516768] [<ffffffffc05e9224>] ? _xfs_buf_ioapply+0x334/0x460 [xfs] [10919.516774] [<ffffffffb991432d>] wait_for_completion+0xfd/0x140 [10919.516782] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10919.516821] [<ffffffffc05eb0a3>] ? _xfs_buf_read+0x23/0x40 [xfs] [10919.516859] [<ffffffffc05eafa9>] xfs_buf_submit_wait+0xf9/0x1d0 [xfs] [10919.516902] [<ffffffffc061b279>] ? 
xfs_trans_read_buf_map+0x199/0x400 [xfs] [10919.516940] [<ffffffffc05eb0a3>] _xfs_buf_read+0x23/0x40 [xfs] [10919.516977] [<ffffffffc05eb1b9>] xfs_buf_read_map+0xf9/0x160 [xfs] [10919.517022] [<ffffffffc061b279>] xfs_trans_read_buf_map+0x199/0x400 [xfs] [10919.517057] [<ffffffffc05c8d04>] xfs_da_read_buf+0xd4/0x100 [xfs] [10919.517091] [<ffffffffc05c8d53>] xfs_da3_node_read+0x23/0xd0 [xfs] [10919.517126] [<ffffffffc05c9fee>] xfs_da3_node_lookup_int+0x6e/0x2f0 [xfs] [10919.517160] [<ffffffffc05d5a1d>] xfs_dir2_node_lookup+0x4d/0x170 [xfs] [10919.517194] [<ffffffffc05ccf5d>] xfs_dir_lookup+0x1bd/0x1e0 [xfs] [10919.517233] [<ffffffffc05fd8d9>] xfs_lookup+0x69/0x140 [xfs] [10919.517271] [<ffffffffc05fa018>] xfs_vn_lookup+0x78/0xc0 [xfs] [10919.517278] [<ffffffffb9425cf3>] lookup_real+0x23/0x60 [10919.517283] [<ffffffffb9426702>] __lookup_hash+0x42/0x60 [10919.517288] [<ffffffffb942d519>] SYSC_renameat2+0x3a9/0x5a0 [10919.517296] [<ffffffffb94d3753>] ? selinux_file_free_security+ 0x23/0x30 [10919.517304] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [10919.517309] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [10919.517313] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [10919.517318] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [10919.517323] [<ffffffffb942e58e>] SyS_renameat2+0xe/0x10 [10919.517328] [<ffffffffb942e5ce>] SyS_rename+0x1e/0x20 [10919.517333] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10919.517339] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [11159.496095] INFO: task glusteriotwr9:15482 blocked for more than 120 seconds. [11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11159.498978] glusteriotwr9 D ffff971fa0fa1fa0 0 15482 1 0x00000080 [11159.498984] Call Trace: [11159.498995] [<ffffffffb9911f00>] ? 
bit_wait+0x50/0x50 [11159.498999] [<ffffffffb9913f79>] schedule+0x29/0x70 [11159.499003] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11159.499056] [<ffffffffc05dd9b7>] ? xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs] [11159.499082] [<ffffffffc05dd43e>] ? xfs_iext_bno_to_irec+0x8e/0xd0 [xfs] [11159.499090] [<ffffffffb92f7a12>] ? ktime_get_ts64+0x52/0xf0 [11159.499093] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50 [11159.499097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20 [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50 [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0 [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90 [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40 [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70 [11159.499125] [<ffffffffb9394e85>] grab_cache_page_write_begin+0x55/0xc0 [11159.499130] [<ffffffffb9484b76>] iomap_write_begin+0x66/0x100 [11159.499135] [<ffffffffb9484edf>] iomap_write_actor+0xcf/0x1d0 [11159.499140] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 [11159.499144] [<ffffffffb94854e7>] iomap_apply+0xb7/0x150 [11159.499149] [<ffffffffb9485621>] iomap_file_buffered_write+0xa1/0xe0 [11159.499153] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 [11159.499182] [<ffffffffc05f025d>] xfs_file_buffered_aio_write+0x12d/0x2c0 [xfs] [11159.499213] [<ffffffffc05f057d>] xfs_file_aio_write+0x18d/0x1b0 [xfs] [11159.499217] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 [11159.499222] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 [11159.499225] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 [11159.499230] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [11159.499234] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11159.499238] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than 120 seconds. 
[11279.490197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.491665] xfsaild/dm-10 D ffff9720a8660fd0 0 1134 2 0x00000000 [11279.491671] Call Trace: [11279.491682] [<ffffffffb92a3a2e>] ? try_to_del_timer_sync+0x5e/0x90 [11279.491688] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.491744] [<ffffffffc060de36>] _xfs_log_force+0x1c6/0x2c0 [xfs] [11279.491750] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.491783] [<ffffffffc0619fec>] ? xfsaild+0x16c/0x6f0 [xfs] [11279.491817] [<ffffffffc060df5c>] xfs_log_force+0x2c/0x70 [xfs] [11279.491849] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs] [11279.491880] [<ffffffffc0619fec>] xfsaild+0x16c/0x6f0 [xfs] [11279.491913] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs] [11279.491919] [<ffffffffb92bb161>] kthread+0xd1/0xe0 [11279.491926] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 [11279.491932] [<ffffffffb9920677>] ret_from_fork_nospec_begin+0x21/0x21 [11279.491936] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 [11279.491976] INFO: task glusterclogfsyn:14934 blocked for more than 120 seconds. [11279.493466] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.494952] glusterclogfsyn D ffff97209832af70 0 14934 1 0x00000080 [11279.494957] Call Trace: [11279.494979] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod] [11279.494983] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.494987] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11279.494997] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod] [11279.495001] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11279.495005] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140 [11279.495010] [<ffffffffb92cf1b0>] ? 
wake_up_state+0x20/0x20 [11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 [11279.495049] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs] [11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] [11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.495090] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [11279.495094] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [11279.495098] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.495102] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [11279.495105] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds. [11279.496606] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.498114] glusterposixfsy D ffff972495f84f10 0 14941 1 0x00000080 [11279.498118] Call Trace: [11279.498134] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod] [11279.498138] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.498142] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11279.498152] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod] [11279.498156] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11279.498160] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140 [11279.498165] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.498169] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 [11279.498202] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs] [11279.498231] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] [11279.498238] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.498242] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [11279.498246] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [11279.498250] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.498254] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [11279.498257] INFO: task glusteriotwr1:14950 blocked for more than 120 seconds. 
[11279.499789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.501343] glusteriotwr1 D ffff97208b6daf70 0 14950 1 0x00000080 [11279.501348] Call Trace: [11279.501353] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.501390] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [11279.501396] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.501428] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [11279.501432] [<ffffffffb944ef3f>] generic_write_sync+0x4f/0x70 [11279.501461] [<ffffffffc05f0545>] xfs_file_aio_write+0x155/0x1b0 [xfs] [11279.501466] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 [11279.501471] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 [11279.501475] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 [11279.501479] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [11279.501483] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.501489] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [11279.501493] INFO: task glusteriotwr4:14953 blocked for more than 120 seconds. [11279.503047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.504630] glusteriotwr4 D ffff972499f2bf40 0 14953 1 0x00000080 [11279.504635] Call Trace: [11279.504640] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.504676] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [11279.504681] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.504710] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [11279.504714] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.504718] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [11279.504722] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [11279.504725] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.504730] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [12127.466494] perf: interrupt took too long (8263 > 8150), lowering kernel.perf_event_max_sample_rate to 24000
-------------------- I think this is the cause of the massive ovirt performance issues, irrespective of gluster volume. At the time this happened, I was also ssh'ed into the host, running some rpm query commands. I had just run rpm -qa | grep glusterfs (to verify what version was actually installed), and that command took almost 2 minutes to return! Normally it takes less than 2 seconds. That is all pure local SSD IO, too....
I'm no expert, but it's my understanding that any time software causes these kinds of issues, it's a serious bug in the software, even if it's just mis-handled exceptions. Is this correct?
--Jim
On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir <jim@palousetech.com> wrote:
I think this is the profile information for one of the volumes that lives on the SSDs and is fully operational with no down/problem disks:
[root@ovirt2 yum.repos.d]# gluster volume profile data info Brick: ovirt2.nwfiber.com:/gluster/brick2/data ---------------------------------------------- Cumulative Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 983 2696 1059 No. of Writes: 0 1113 302
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 852 88608 53526 No. of Writes: 522 812340 76257
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 54351 241901 15024 No. of Writes: 21636 8656 8976
Block Size: 131072b+ No. of Reads: 524156 No. of Writes: 296071 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 4189 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1257 RELEASEDIR 0.00 46.19 us 12.00 us 187.00 us 69 FLUSH 0.00 147.00 us 78.00 us 367.00 us 86 REMOVEXATTR 0.00 223.46 us 24.00 us 1166.00 us 149 READDIR 0.00 565.34 us 76.00 us 3639.00 us 88 FTRUNCATE 0.00 263.28 us 20.00 us 28385.00 us 228 LK 0.00 98.84 us 2.00 us 880.00 us 1198 OPENDIR 0.00 91.59 us 26.00 us 10371.00 us 3853 STATFS 0.00 494.14 us 17.00 us 193439.00 us 1171 GETXATTR 0.00 299.42 us 35.00 us 9799.00 us 2044 READDIRP 0.00 1965.31 us 110.00 us 382258.00 us 321 XATTROP 0.01 113.40 us 24.00 us 61061.00 us 8134 STAT 0.01 755.38 us 57.00 us 607603.00 us 3196 DISCARD 0.05 2690.09 us 58.00 us 2704761.00 us 3206 OPEN 0.10 119978.25 us 97.00 us 9406684.00 us 154 SETATTR 0.18 101.73 us 28.00 us 700477.00 us 313379 FSTAT 0.23 1059.84 us 25.00 us 2716124.00 us 38255 LOOKUP 0.47 1024.11 us 54.00 us 6197164.00 us 81455 FXATTROP 1.72 2984.00 us 15.00 us 37098954.00 us 103020 FINODELK 5.92 44315.32 us 51.00 us 24731536.00 us 23957 FSYNC 13.27 2399.78 us 25.00 us 22089540.00 us 991005 READ 37.00 5980.43 us 52.00 us 22099889.00 us 1108976 WRITE 41.04 5452.75 us 13.00 us 22102452.00 us 1349053 INODELK
Duration: 10026 seconds Data Read: 80046027759 bytes Data Written: 44496632320 bytes
Interval 1 Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 983 2696 1059 No. of Writes: 0 838 185
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 852 85856 51575 No. of Writes: 382 705802 57812
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 52673 232093 14984 No. of Writes: 13499 4908 4242
Block Size: 131072b+ No. of Reads: 460040 No. of Writes: 6411 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2093 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1093 RELEASEDIR 0.00 53.38 us 26.00 us 111.00 us 16 FLUSH 0.00 145.14 us 78.00 us 367.00 us 71 REMOVEXATTR 0.00 190.96 us 114.00 us 298.00 us 71 SETATTR 0.00 213.38 us 24.00 us 1145.00 us 90 READDIR 0.00 263.28 us 20.00 us 28385.00 us 228 LK 0.00 101.76 us 2.00 us 880.00 us 1093 OPENDIR 0.01 93.60 us 27.00 us 10371.00 us 3090 STATFS 0.02 537.47 us 17.00 us 193439.00 us 1038 GETXATTR 0.03 297.44 us 35.00 us 9799.00 us 1990 READDIRP 0.03 2357.28 us 110.00 us 382258.00 us 253 XATTROP 0.04 385.93 us 58.00 us 47593.00 us 2091 OPEN 0.04 114.86 us 24.00 us 61061.00 us 7715 STAT 0.06 444.59 us 57.00 us 333240.00 us 3053 DISCARD 0.42 316.24 us 25.00 us 290728.00 us 29823 LOOKUP 0.73 257.92 us 54.00 us 344812.00 us 63296 FXATTROP 1.37 98.30 us 28.00 us 67621.00 us 313172 FSTAT 1.58 2124.69 us 51.00 us 849200.00 us 16717 FSYNC 5.73 162.46 us 52.00 us 748492.00 us 794079 WRITE 7.19 2065.17 us 16.00 us 37098954.00 us 78381 FINODELK 36.44 886.32 us 25.00 us 2216436.00 us 925421 READ 46.30 1178.04 us 13.00 us 1700704.00 us 884635 INODELK
Duration: 7485 seconds Data Read: 71250527215 bytes Data Written: 5119903744 bytes
Brick: ovirt3.nwfiber.com:/gluster/brick2/data ---------------------------------------------- Cumulative Stats: Block Size: 1b+ No. of Reads: 0 No. of Writes: 3264419 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 90 FORGET 0.00 0.00 us 0.00 us 0.00 us 9462 RELEASE 0.00 0.00 us 0.00 us 0.00 us 4254 RELEASEDIR 0.00 50.52 us 13.00 us 190.00 us 71 FLUSH 0.00 186.97 us 87.00 us 713.00 us 86 REMOVEXATTR 0.00 79.32 us 33.00 us 189.00 us 228 LK 0.00 220.98 us 129.00 us 513.00 us 86 SETATTR 0.01 259.30 us 26.00 us 2632.00 us 137 READDIR 0.02 322.76 us 145.00 us 2125.00 us 321 XATTROP 0.03 109.55 us 2.00 us 1258.00 us 1193 OPENDIR 0.05 70.21 us 21.00 us 431.00 us 3196 DISCARD 0.05 169.26 us 21.00 us 2315.00 us 1545 GETXATTR 0.12 176.85 us 63.00 us 2844.00 us 3206 OPEN 0.61 303.49 us 90.00 us 3085.00 us 9633 FSTAT 2.44 305.66 us 28.00 us 3716.00 us 38230 LOOKUP 4.52 266.22 us 55.00 us 53424.00 us 81455 FXATTROP 6.96 1397.99 us 51.00 us 64822.00 us 23889 FSYNC 16.48 84.74 us 25.00 us 6917.00 us 932592 WRITE 30.16 106.90 us 13.00 us 3920189.00 us 1353046 INODELK 38.55 1794.52 us 14.00 us 16210553.00 us 103039 FINODELK
Duration: 66562 seconds Data Read: 0 bytes Data Written: 3264419 bytes
Interval 1 Stats: Block Size: 1b+ No. of Reads: 0 No. of Writes: 794080 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2093 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1093 RELEASEDIR 0.00 70.31 us 26.00 us 125.00 us 16 FLUSH 0.00 193.10 us 103.00 us 713.00 us 71 REMOVEXATTR 0.01 227.32 us 133.00 us 513.00 us 71 SETATTR 0.01 79.32 us 33.00 us 189.00 us 228 LK 0.01 259.83 us 35.00 us 1138.00 us 89 READDIR 0.03 318.26 us 145.00 us 2047.00 us 253 XATTROP 0.04 112.67 us 3.00 us 1258.00 us 1093 OPENDIR 0.06 167.98 us 23.00 us 1951.00 us 1014 GETXATTR 0.08 70.97 us 22.00 us 431.00 us 3053 DISCARD 0.13 183.78 us 66.00 us 2844.00 us 2091 OPEN 1.01 303.82 us 90.00 us 3085.00 us 9610 FSTAT 3.27 316.59 us 30.00 us 3716.00 us 29820 LOOKUP 5.83 265.79 us 59.00 us 53424.00 us 63296 FXATTROP 7.95 1373.89 us 51.00 us 64822.00 us 16717 FSYNC 23.17 851.99 us 14.00 us 16210553.00 us 78555 FINODELK 24.04 87.44 us 27.00 us 6917.00 us 794081 WRITE 34.36 111.91 us 14.00 us 984871.00 us 886790 INODELK
Duration: 7485 seconds Data Read: 0 bytes Data Written: 794080 bytes
----------------------- Here is the data from the volume that is backed by the SSHDs and has one failed disk: [root@ovirt2 yum.repos.d]# gluster volume profile data-hdd info Brick: 172.172.1.12:/gluster/brick3/data-hdd -------------------------------------------- Cumulative Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 1702 86 16 No. of Writes: 0 767 71
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 19 51841 2049 No. of Writes: 76 60668 35727
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 1744 639 1088 No. of Writes: 8524 2410 1285
Block Size: 131072b+ No. of Reads: 771999 No. of Writes: 29584 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2902 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1517 RELEASEDIR 0.00 197.00 us 197.00 us 197.00 us 1 FTRUNCATE 0.00 70.24 us 16.00 us 758.00 us 51 FLUSH 0.00 143.93 us 82.00 us 305.00 us 57 REMOVEXATTR 0.00 178.63 us 105.00 us 712.00 us 60 SETATTR 0.00 67.30 us 19.00 us 572.00 us 555 LK 0.00 322.80 us 23.00 us 4673.00 us 138 READDIR 0.00 336.56 us 106.00 us 11994.00 us 237 XATTROP 0.00 84.70 us 28.00 us 1071.00 us 3469 STATFS 0.01 387.75 us 2.00 us 146017.00 us 1467 OPENDIR 0.01 148.59 us 21.00 us 64374.00 us 4454 STAT 0.02 783.02 us 16.00 us 93502.00 us 1902 GETXATTR 0.03 1516.10 us 17.00 us 210690.00 us 1364 ENTRYLK 0.03 2555.47 us 300.00 us 674454.00 us 1064 READDIRP 0.07 85.74 us 19.00 us 68340.00 us 62849 FSTAT 0.07 1978.12 us 59.00 us 202596.00 us 2729 OPEN 0.22 708.57 us 15.00 us 394799.00 us 25447 LOOKUP 5.94 2331.74 us 15.00 us 1099530.00 us 207534 FINODELK 7.31 8311.75 us 58.00 us 1800216.00 us 71668 FXATTROP 12.49 7735.19 us 51.00 us 3595513.00 us 131642 WRITE 17.70 957.08 us 16.00 us 13700466.00 us 1508160 INODELK 24.55 2546.43 us 26.00 us 5077347.00 us 786060 READ 31.56 49699.15 us 47.00 us 3746331.00 us 51777 FSYNC
Duration: 10101 seconds Data Read: 101562897361 bytes Data Written: 4834450432 bytes
Interval 0 Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 1702 86 16 No. of Writes: 0 767 71
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 19 51841 2049 No. of Writes: 76 60668 35727
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 1744 639 1088 No. of Writes: 8524 2410 1285
Block Size: 131072b+ No. of Reads: 771999 No. of Writes: 29584 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2902 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1517 RELEASEDIR 0.00 197.00 us 197.00 us 197.00 us 1 FTRUNCATE 0.00 70.24 us 16.00 us 758.00 us 51 FLUSH 0.00 143.93 us 82.00 us 305.00 us 57 REMOVEXATTR 0.00 178.63 us 105.00 us 712.00 us 60 SETATTR 0.00 67.30 us 19.00 us 572.00 us 555 LK 0.00 322.80 us 23.00 us 4673.00 us 138 READDIR 0.00 336.56 us 106.00 us 11994.00 us 237 XATTROP 0.00 84.70 us 28.00 us 1071.00 us 3469 STATFS 0.01 387.75 us 2.00 us 146017.00 us 1467 OPENDIR 0.01 148.59 us 21.00 us 64374.00 us 4454 STAT 0.02 783.02 us 16.00 us 93502.00 us 1902 GETXATTR 0.03 1516.10 us 17.00 us 210690.00 us 1364 ENTRYLK 0.03 2555.47 us 300.00 us 674454.00 us 1064 READDIRP 0.07 85.73 us 19.00 us 68340.00 us 62849 FSTAT 0.07 1978.12 us 59.00 us 202596.00 us 2729 OPEN 0.22 708.57 us 15.00 us 394799.00 us 25447 LOOKUP 5.94 2334.57 us 15.00 us 1099530.00 us 207534 FINODELK 7.31 8311.49 us 58.00 us 1800216.00 us 71668 FXATTROP 12.49 7735.32 us 51.00 us 3595513.00 us 131642 WRITE 17.71 957.08 us 16.00 us 13700466.00 us 1508160 INODELK 24.56 2546.42 us 26.00 us 5077347.00 us 786060 READ 31.54 49651.63 us 47.00 us 3746331.00 us 51777 FSYNC
Duration: 10101 seconds Data Read: 101562897361 bytes Data Written: 4834450432 bytes
On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir <jim@palousetech.com> wrote:
Thank you for your response.
I have 4 gluster volumes. 3 are replica 2 + arbiter: the replica bricks are on ovirt1 and ovirt2, with the arbiter on ovirt3. The 4th volume is replica 3, with a brick on all three ovirt machines.
The first 3 volumes are on an SSD disk; the 4th is on a Seagate SSHD (same in all three machines). On ovirt3, the SSHD has reported hard IO failures, and that brick is offline. However, the other two replicas are fully operational (although they still show entries in the heal info output that won't go away; that may be expected until I replace the failed disk).
What is bothering me is that ALL 4 gluster volumes are showing horrible performance issues. At this point, as the bad disk has been completely offlined, I would expect gluster to perform at normal speed, but that is definitely not the case.
I've also noticed that the performance hits seem to come in waves: things work acceptably (but slowly) for a while, then suddenly it's as if all disk IO on all volumes (including non-gluster local OS disk volumes on the hosts) pauses for about 30 seconds, then IO resumes. During those pauses I start getting VM-not-responding and host-not-responding notices, and the applications have major issues.
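One way to catch those ~30-second stalls as they happen is to watch per-device latency while they occur; a minimal sketch, assuming the sysstat package is installed for iostat:

```shell
# Sample extended per-device stats every 5 seconds, 3 samples.  During a
# stall, look for the await/%util columns spiking on the dm-* devices
# that back the gluster bricks.
if command -v iostat >/dev/null 2>&1; then
    iostat -dxm 5 3
else
    echo "iostat not found; install the sysstat package"
fi
```

If the stall shows up on the local OS disks as well as the brick devices, that points below gluster, at the kernel or hardware layer.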
I've shut down most of my VMs and am down to just my essential core VMs (shed about 75% of my VMs). I am still experiencing the same issues.
Am I correct in believing that once the failed disk was brought offline, performance should return to normal?
On Tue, May 29, 2018 at 1:27 PM, Alex K <rightkicktech@gmail.com> wrote:
I would check disks status and accessibility of mount points where your gluster volumes reside.
On Tue, May 29, 2018, 22:28 Jim Kusznir <jim@palousetech.com> wrote:
On one ovirt server, I'm now seeing these messages: [56474.239725] blk_update_request: 63 callbacks suppressed [56474.239732] blk_update_request: I/O error, dev dm-2, sector 0 [56474.240602] blk_update_request: I/O error, dev dm-2, sector 3905945472 [56474.241346] blk_update_request: I/O error, dev dm-2, sector 3905945584 [56474.242236] blk_update_request: I/O error, dev dm-2, sector 2048 [56474.243072] blk_update_request: I/O error, dev dm-2, sector 3905943424 [56474.243997] blk_update_request: I/O error, dev dm-2, sector 3905943536 [56474.247347] blk_update_request: I/O error, dev dm-2, sector 0 [56474.248315] blk_update_request: I/O error, dev dm-2, sector 3905945472 [56474.249231] blk_update_request: I/O error, dev dm-2, sector 3905945584 [56474.250221] blk_update_request: I/O error, dev dm-2, sector 2048
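As an aside, the dm-2 in those errors can be mapped back to a device-mapper (LVM) name through sysfs, which would confirm whether the failing device is the SSHD brick; a small sketch:

```shell
# Resolve a dm-N minor from the blk_update_request errors to its
# device-mapper name (e.g. an LVM logical volume).
DM=dm-2
if [ -r "/sys/block/$DM/dm/name" ]; then
    printf '%s -> %s\n' "$DM" "$(cat "/sys/block/$DM/dm/name")"
else
    echo "$DM not present on this host"
fi
```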
On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir <jim@palousetech.com> wrote:
I see in messages on ovirt3 (my 3rd machine, the one upgraded to 4.2):
May 29 11:54:41 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) May 29 11:54:51 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) May 29 11:55:01 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) (appears a lot).
I also found, in the ssh session on that host, some syslog warnings about the backing disk for one of the gluster volumes (the straight replica 3 one). The glusterfs brick process for that disk on that machine went offline. It's my understanding that the volume should continue to work with the other two machines while I attempt to replace that disk, right? Attempted writes (touching an empty file) can take 15 seconds; repeating the same write later is much faster.
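That 15-second touch can be quantified by timing a single synced 4 KiB write; a minimal sketch (the temp file is only to show the invocation; point TARGET at a file on the slow gluster mount instead):

```shell
# Time one 4 KiB write followed by fsync; a healthy local SSD finishes in
# milliseconds, so multi-second times implicate the storage stack.
TARGET=$(mktemp)    # replace with a path on the affected gluster volume
time dd if=/dev/zero of="$TARGET" bs=4k count=1 conv=fsync 2>/dev/null
rm -f "$TARGET"
```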
Gluster generates a bunch of different log files; I don't know which ones you want, or from which machine(s).
How do I do "volume profiling"?
Thanks!
On Tue, May 29, 2018 at 11:53 AM, Sahina Bose <sabose@redhat.com> wrote:
> Do you see errors reported in the mount logs for the volume? If so, > could you attach the logs? > Any issues with your underlying disks. Can you also attach output of > volume profiling? > > On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir <jim@palousetech.com> > wrote: > >> Ok, things have gotten MUCH worse this morning. I'm getting random >> errors from VMs, right now, about a third of my VMs have been paused due to >> storage issues, and most of the remaining VMs are not performing well. >> >> At this point, I am in full EMERGENCY mode, as my production >> services are now impacted, and I'm getting calls coming in with problems... >> >> I'd greatly appreciate help...VMs are running VERY slowly (when >> they run), and they are steadily getting worse. I don't know why. I was >> seeing CPU peaks (to 100%) on several VMs, in perfect sync, for a few >> minutes at a time (while the VM became unresponsive and any VMs I was >> logged into that were linux were giving me the CPU stuck messages in my >> origional post). Is all this storage related? >> >> I also have two different gluster volumes for VM storage, and only >> one had the issues, but now VMs in both are being affected at the same time >> and same way. >> >> --Jim >> >> On Mon, May 28, 2018 at 10:50 PM, Sahina Bose <sabose@redhat.com> >> wrote: >> >>> [Adding gluster-users to look at the heal issue] >>> >>> On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir <jim@palousetech.com> >>> wrote: >>> >>>> Hello: >>>> >>>> I've been having some cluster and gluster performance issues >>>> lately. I also found that my cluster was out of date, and was trying to >>>> apply updates (hoping to fix some of these), and discovered the ovirt 4.1 >>>> repos were taken completely offline. So, I was forced to begin an upgrade >>>> to 4.2. According to docs I found/read, I needed only add the new repo, do >>>> a yum update, reboot, and be good on my hosts (did the yum update, the >>>> engine-setup on my hosted engine). 
Things seemed to work relatively well, >>>> except for a gluster sync issue that showed up. >>>> >>>> My cluster is a 3 node hyperconverged cluster. I upgraded the >>>> hosted engine first, then engine 3. When engine 3 came back up, for some >>>> reason one of my gluster volumes would not sync. Here's sample output: >>>> >>>> [root@ovirt3 ~]# gluster volume heal data-hdd info >>>> Brick 172.172.1.11:/gluster/brick3/data-hdd >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>>> 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>>> 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>> Status: Connected >>>> Number of entries: 8 >>>> >>>> Brick 172.172.1.12:/gluster/brick3/data-hdd >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>> 
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>>> 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>>> 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>> Status: Connected >>>> Number of entries: 8 >>>> >>>> Brick 172.172.1.13:/gluster/brick3/data-hdd >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>>> 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>>> 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>> Status: Connected >>>> Number of entries: 8 >>>> >>>> --------- >>>> Its been in this state for a couple days now, and bandwidth >>>> monitoring shows no appreciable data moving. 
I've tried repeatedly >>>> commanding a full heal from all three clusters in the node. Its always the >>>> same files that need healing. >>>> >>>> When running gluster volume heal data-hdd statistics, I see >>>> sometimes different information, but always some number of "heal failed" >>>> entries. It shows 0 for split brain. >>>> >>>> I'm not quite sure what to do. I suspect it may be due to nodes >>>> 1 and 2 still being on the older ovirt/gluster release, but I'm afraid to >>>> upgrade and reboot them until I have a good gluster sync (don't need to >>>> create a split brain issue). How do I proceed with this? >>>> >>>> Second issue: I've been experiencing VERY POOR performance on >>>> most of my VMs. To the tune that logging into a windows 10 vm via remote >>>> desktop can take 5 minutes, launching quickbooks inside said vm can easily >>>> take 10 minutes. On some linux VMs, I get random messages like this: >>>> Message from syslogd@unifi at May 28 20:39:23 ... >>>> kernel:[6171996.308904] NMI watchdog: BUG: soft lockup - CPU#0 >>>> stuck for 22s! [mongod:14766] >>>> >>>> (the process and PID are often different) >>>> >>>> I'm not quite sure what to do about this either. My initial >>>> thought was upgrad everything to current and see if its still there, but I >>>> cannot move forward with that until my gluster is healed... >>>> >>>> Thanks! >>>> --Jim >>>> >>>> _______________________________________________ >>>> Users mailing list -- users@ovirt.org >>>> To unsubscribe send an email to users-leave@ovirt.org >>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ >>>> oVirt Code of Conduct: https://www.ovirt.org/communit >>>> y/about/community-guidelines/ >>>> List Archives: https://lists.ovirt.org/archiv >>>> es/list/users@ovirt.org/message/3LEV6ZQ3JV2XLAL7NYBTXOYMYUOTIRQF/ >>>> >>>> >>> >> >

@Jim Kusznir For the heal issue, can you provide the getfattr output of one of the 8 files in question from all 3 bricks? Example:
`getfattr -d -m . -e hex /gluster/brick3/data-hdd/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9`
Also provide the stat output of the same file from all 3 bricks.
Thanks,
Ravi
On 05/30/2018 09:47 AM, Krutika Dhananjay wrote:
Adding Ravi to look into the heal issue.
As for the fsync hang and subsequent IO errors, it looks a lot like https://bugzilla.redhat.com/show_bug.cgi?id=1497156, and Paolo Bonzini from QEMU had pointed out that this would be fixed by the following commit:
commit e72c9a2a67a6400c8ef3d01d4c461dbbbfa0e1f0 Author: Paolo Bonzini <pbonzini@redhat.com <mailto:pbonzini@redhat.com>> Date: Wed Jun 21 16:35:46 2017 +0200
scsi: virtio_scsi: let host do exception handling
virtio_scsi tries to do exception handling after the default 30 seconds timeout expires. However, it's better to let the host control the timeout, otherwise with a heavy I/O load it is likely that an abort will also timeout. This leads to fatal errors like filesystems going offline.
Disable the 'sd' timeout and allow the host to do exception handling, following the precedent of the storvsc driver.
Hannes has a proposal to introduce timeouts in virtio, but this provides an immediate solution for stable kernels too.
[mkp: fixed typo]
Reported-by: Douglas Miller <dougmill@linux.vnet.ibm.com <mailto:dougmill@linux.vnet.ibm.com>> Cc: "James E.J. Bottomley" <jejb@linux.vnet.ibm.com <mailto:jejb@linux.vnet.ibm.com>> Cc: "Martin K. Petersen" <martin.petersen@oracle.com <mailto:martin.petersen@oracle.com>> Cc: Hannes Reinecke <hare@suse.de <mailto:hare@suse.de>> Cc:linux-scsi@vger.kernel.org <mailto:linux-scsi@vger.kernel.org> Cc:stable@vger.kernel.org <mailto:stable@vger.kernel.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com <mailto:pbonzini@redhat.com>> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com <mailto:martin.petersen@oracle.com>>
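As a quick guest-side check of whether that timeout is in play, the current SCSI command timer can be read from sysfs inside an affected VM; a sketch (device names will vary, and on a kernel carrying the patch the guest-side handling differs):

```shell
# Print the SCSI command timeout (in seconds) for each sd* disk in the
# guest; the commit above effectively disables this guest-side timer and
# lets the host handle exceptions instead.
for t in /sys/block/sd*/device/timeout; do
    [ -e "$t" ] || continue
    printf '%s: %s s\n' "$t" "$(cat "$t")"
done
```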
Adding Paolo/Kevin to comment.
As for the poor gluster performance, could you disable cluster.eager-lock and see if that makes any difference:
# gluster volume set <VOL> cluster.eager-lock off
Do also capture the volume profile again if you still see performance issues after disabling eager-lock.
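Capturing that profile can be scripted; a minimal sketch, where `profile_snapshot` and the `GLUSTER` override are hypothetical names of mine (the underlying `gluster volume profile start/info/stop` subcommands are the real CLI):

```shell
# Collect per-brick FOP latency stats for a volume over a sample window.
GLUSTER=${GLUSTER:-gluster}     # override to wrap with sudo/ssh if needed
profile_snapshot() {
    vol=$1
    window=${2:-60}                        # seconds to sample
    "$GLUSTER" volume profile "$vol" start
    sleep "$window"                        # let it run during a slow period
    "$GLUSTER" volume profile "$vol" info  # cumulative + interval stats
    "$GLUSTER" volume profile "$vol" stop
}
# Usage: profile_snapshot data-hdd 120
```

Running it once before and once after toggling eager-lock gives directly comparable interval stats.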
-Krutika
On Wed, May 30, 2018 at 6:55 AM, Jim Kusznir <jim@palousetech.com <mailto:jim@palousetech.com>> wrote:
I also finally found the following in my system log on one server:
[10679.524491] INFO: task glusterclogro:14933 blocked for more than 120 seconds. [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10679.527144] glusterclogro D ffff97209832bf40 0 14933 1 0x00000080 [10679.527150] Call Trace: [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.527268] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [10679.527271] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [10679.527275] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.527279] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds. [10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10679.529956] glusterposixfsy D ffff972495f84f10 0 14941 1 0x00000080 [10679.529961] Call Trace: [10679.529966] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.530003] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.530008] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.530038] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.530042] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.530046] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [10679.530050] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [10679.530054] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.530058] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120 seconds. [10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
[10679.533732] glusteriotwr13 D ffff9720a83f0000 0 15486 1 0x00000080
[10679.533738] Call Trace:
[10679.533747] [<ffffffffb9913f79>] schedule+0x29/0x70
[10679.533799] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[10679.533806] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[10679.533846] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[10679.533852] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[10679.533858] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[10679.533863] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20
[10679.533868] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[10679.533873] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[10919.512757] INFO: task glusterclogro:14933 blocked for more than 120 seconds.
[10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[10919.516663] glusterclogro D ffff97209832bf40 0 14933 1 0x00000080
[10919.516677] Call Trace:
[10919.516690] [<ffffffffb9913f79>] schedule+0x29/0x70
[10919.516696] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
[10919.516703] [<ffffffffb951cc04>] ? blk_finish_plug+0x14/0x40
[10919.516768] [<ffffffffc05e9224>] ? _xfs_buf_ioapply+0x334/0x460 [xfs]
[10919.516774] [<ffffffffb991432d>] wait_for_completion+0xfd/0x140
[10919.516782] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[10919.516821] [<ffffffffc05eb0a3>] ? _xfs_buf_read+0x23/0x40 [xfs]
[10919.516859] [<ffffffffc05eafa9>] xfs_buf_submit_wait+0xf9/0x1d0 [xfs]
[10919.516902] [<ffffffffc061b279>] ? xfs_trans_read_buf_map+0x199/0x400 [xfs]
[10919.516940] [<ffffffffc05eb0a3>] _xfs_buf_read+0x23/0x40 [xfs]
[10919.516977] [<ffffffffc05eb1b9>] xfs_buf_read_map+0xf9/0x160 [xfs]
[10919.517022] [<ffffffffc061b279>] xfs_trans_read_buf_map+0x199/0x400 [xfs]
[10919.517057] [<ffffffffc05c8d04>] xfs_da_read_buf+0xd4/0x100 [xfs]
[10919.517091] [<ffffffffc05c8d53>] xfs_da3_node_read+0x23/0xd0 [xfs]
[10919.517126] [<ffffffffc05c9fee>] xfs_da3_node_lookup_int+0x6e/0x2f0 [xfs]
[10919.517160] [<ffffffffc05d5a1d>] xfs_dir2_node_lookup+0x4d/0x170 [xfs]
[10919.517194] [<ffffffffc05ccf5d>] xfs_dir_lookup+0x1bd/0x1e0 [xfs]
[10919.517233] [<ffffffffc05fd8d9>] xfs_lookup+0x69/0x140 [xfs]
[10919.517271] [<ffffffffc05fa018>] xfs_vn_lookup+0x78/0xc0 [xfs]
[10919.517278] [<ffffffffb9425cf3>] lookup_real+0x23/0x60
[10919.517283] [<ffffffffb9426702>] __lookup_hash+0x42/0x60
[10919.517288] [<ffffffffb942d519>] SYSC_renameat2+0x3a9/0x5a0
[10919.517296] [<ffffffffb94d3753>] ? selinux_file_free_security+0x23/0x30
[10919.517304] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[10919.517309] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[10919.517313] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[10919.517318] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[10919.517323] [<ffffffffb942e58e>] SyS_renameat2+0xe/0x10
[10919.517328] [<ffffffffb942e5ce>] SyS_rename+0x1e/0x20
[10919.517333] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[10919.517339] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11159.496095] INFO: task glusteriotwr9:15482 blocked for more than 120 seconds.
[11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11159.498978] glusteriotwr9 D ffff971fa0fa1fa0 0 15482 1 0x00000080
[11159.498984] Call Trace:
[11159.498995] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50
[11159.498999] [<ffffffffb9913f79>] schedule+0x29/0x70
[11159.499003] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
[11159.499056] [<ffffffffc05dd9b7>] ? xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs]
[11159.499082] [<ffffffffc05dd43e>] ? xfs_iext_bno_to_irec+0x8e/0xd0 [xfs]
[11159.499090] [<ffffffffb92f7a12>] ? ktime_get_ts64+0x52/0xf0
[11159.499093] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50
[11159.499097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130
[11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20
[11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50
[11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0
[11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90
[11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40
[11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70
[11159.499125] [<ffffffffb9394e85>] grab_cache_page_write_begin+0x55/0xc0
[11159.499130] [<ffffffffb9484b76>] iomap_write_begin+0x66/0x100
[11159.499135] [<ffffffffb9484edf>] iomap_write_actor+0xcf/0x1d0
[11159.499140] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80
[11159.499144] [<ffffffffb94854e7>] iomap_apply+0xb7/0x150
[11159.499149] [<ffffffffb9485621>] iomap_file_buffered_write+0xa1/0xe0
[11159.499153] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80
[11159.499182] [<ffffffffc05f025d>] xfs_file_buffered_aio_write+0x12d/0x2c0 [xfs]
[11159.499213] [<ffffffffc05f057d>] xfs_file_aio_write+0x18d/0x1b0 [xfs]
[11159.499217] [<ffffffffb941a533>] do_sync_write+0x93/0xe0
[11159.499222] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0
[11159.499225] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0
[11159.499230] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11159.499234] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11159.499238] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than 120 seconds.
[11279.490197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.491665] xfsaild/dm-10 D ffff9720a8660fd0 0 1134 2 0x00000000
[11279.491671] Call Trace:
[11279.491682] [<ffffffffb92a3a2e>] ? try_to_del_timer_sync+0x5e/0x90
[11279.491688] [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.491744] [<ffffffffc060de36>] _xfs_log_force+0x1c6/0x2c0 [xfs]
[11279.491750] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.491783] [<ffffffffc0619fec>] ? xfsaild+0x16c/0x6f0 [xfs]
[11279.491817] [<ffffffffc060df5c>] xfs_log_force+0x2c/0x70 [xfs]
[11279.491849] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[11279.491880] [<ffffffffc0619fec>] xfsaild+0x16c/0x6f0 [xfs]
[11279.491913] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[11279.491919] [<ffffffffb92bb161>] kthread+0xd1/0xe0
[11279.491926] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40
[11279.491932] [<ffffffffb9920677>] ret_from_fork_nospec_begin+0x21/0x21
[11279.491936] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40
[11279.491976] INFO: task glusterclogfsyn:14934 blocked for more than 120 seconds.
[11279.493466] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.494952] glusterclogfsyn D ffff97209832af70 0 14934 1 0x00000080
[11279.494957] Call Trace:
[11279.494979] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod]
[11279.494983] [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.494987] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
[11279.494997] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod]
[11279.495001] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130
[11279.495005] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140
[11279.495010] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110
[11279.495049] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs]
[11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs]
[11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[11279.495090] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11279.495094] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20
[11279.495098] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11279.495102] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11279.495105] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds.
[11279.496606] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.498114] glusterposixfsy D ffff972495f84f10 0 14941 1 0x00000080
[11279.498118] Call Trace:
[11279.498134] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod]
[11279.498138] [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.498142] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
[11279.498152] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod]
[11279.498156] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130
[11279.498160] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140
[11279.498165] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.498169] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110
[11279.498202] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs]
[11279.498231] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs]
[11279.498238] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[11279.498242] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11279.498246] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20
[11279.498250] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11279.498254] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11279.498257] INFO: task glusteriotwr1:14950 blocked for more than 120 seconds.
[11279.499789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.501343] glusteriotwr1 D ffff97208b6daf70 0 14950 1 0x00000080
[11279.501348] Call Trace:
[11279.501353] [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.501390] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[11279.501396] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.501428] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[11279.501432] [<ffffffffb944ef3f>] generic_write_sync+0x4f/0x70
[11279.501461] [<ffffffffc05f0545>] xfs_file_aio_write+0x155/0x1b0 [xfs]
[11279.501466] [<ffffffffb941a533>] do_sync_write+0x93/0xe0
[11279.501471] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0
[11279.501475] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0
[11279.501479] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11279.501483] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11279.501489] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11279.501493] INFO: task glusteriotwr4:14953 blocked for more than 120 seconds.
[11279.503047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.504630] glusteriotwr4 D ffff972499f2bf40 0 14953 1 0x00000080
[11279.504635] Call Trace:
[11279.504640] [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.504676] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[11279.504681] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.504710] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[11279.504714] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[11279.504718] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11279.504722] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20
[11279.504725] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11279.504730] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[12127.466494] perf: interrupt took too long (8263 > 8150), lowering kernel.perf_event_max_sample_rate to 24000
--------------------
I think this is the cause of the massive oVirt performance issues, irrespective of gluster volume. At the time this happened, I was also ssh'ed into the host running some rpm query commands. I had just run rpm -qa | grep glusterfs (to verify what version was actually installed), and that command took almost 2 minutes to return! Normally it takes less than 2 seconds. That is all pure local SSD IO, too....
I'm no expert, but it's my understanding that any time software causes these kinds of issues, it's a serious bug in that software, even if it's only mishandled exceptions. Is this correct?
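For reference, those "blocked for more than 120 seconds" warnings come from the kernel's hung-task watchdog, and its knobs are plain sysctls (standard Linux paths; note that tuning them only silences the symptom, it does not fix the underlying I/O stall). A minimal sketch, guarded so it is safe to run anywhere:

```shell
# Hung-task detector settings (may be absent on kernels built without it).
f=/proc/sys/kernel/hung_task_timeout_secs
if [ -r "$f" ]; then
    # Seconds a task may sit in uninterruptible sleep (D state) before the
    # kernel logs a stack trace like the ones above; 120 is the default.
    cat "$f"
    # To silence the messages (as the log itself suggests):
    #   echo 0 > /proc/sys/kernel/hung_task_timeout_secs
    # To instead panic on a hung task and capture a crash dump:
    #   echo 1 > /proc/sys/kernel/hung_task_panic
else
    echo "hung-task detector not available on this kernel"
fi
```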
--Jim
On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir <jim@palousetech.com <mailto:jim@palousetech.com>> wrote:
I think this is the profile information for one of the volumes that lives on the SSDs and is fully operational with no down/problem disks:
[root@ovirt2 yum.repos.d]# gluster volume profile data info Brick: ovirt2.nwfiber.com:/gluster/brick2/data ---------------------------------------------- Cumulative Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 983 2696 1059 No. of Writes: 0 1113 302 Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 852 88608 53526 No. of Writes: 522 812340 76257 Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 54351 241901 15024 No. of Writes: 21636 8656 8976 Block Size: 131072b+ No. of Reads: 524156 No. of Writes: 296071 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 4189 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1257 RELEASEDIR 0.00 46.19 us 12.00 us 187.00 us 69 FLUSH 0.00 147.00 us 78.00 us 367.00 us 86 REMOVEXATTR 0.00 223.46 us 24.00 us 1166.00 us 149 READDIR 0.00 565.34 us 76.00 us 3639.00 us 88 FTRUNCATE 0.00 263.28 us 20.00 us 28385.00 us 228 LK 0.00 98.84 us 2.00 us 880.00 us 1198 OPENDIR 0.00 91.59 us 26.00 us 10371.00 us 3853 STATFS 0.00 494.14 us 17.00 us 193439.00 us 1171 GETXATTR 0.00 299.42 us 35.00 us 9799.00 us 2044 READDIRP 0.00 1965.31 us 110.00 us 382258.00 us 321 XATTROP 0.01 113.40 us 24.00 us 61061.00 us 8134 STAT 0.01 755.38 us 57.00 us 607603.00 us 3196 DISCARD 0.05 2690.09 us 58.00 us 2704761.00 us 3206 OPEN 0.10 119978.25 us 97.00 us 9406684.00 us 154 SETATTR 0.18 101.73 us 28.00 us 700477.00 us 313379 FSTAT 0.23 1059.84 us 25.00 us 2716124.00 us 38255 LOOKUP 0.47 1024.11 us 54.00 us 6197164.00 us 81455 FXATTROP 1.72 2984.00 us 15.00 us 37098954.00 us 103020 FINODELK 5.92 44315.32 us 51.00 us 24731536.00 us 23957 FSYNC 13.27 2399.78 us 25.00 us 22089540.00 us 991005 READ 37.00 5980.43 us 52.00 us 22099889.00 us 1108976 WRITE 41.04 5452.75 us 13.00 us 22102452.00 us 1349053 INODELK Duration: 10026 seconds Data Read: 80046027759 bytes Data Written: 44496632320 bytes Interval 1 Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 983 2696 1059 No. 
of Writes: 0 838 185 Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 852 85856 51575 No. of Writes: 382 705802 57812 Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 52673 232093 14984 No. of Writes: 13499 4908 4242 Block Size: 131072b+ No. of Reads: 460040 No. of Writes: 6411 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2093 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1093 RELEASEDIR 0.00 53.38 us 26.00 us 111.00 us 16 FLUSH 0.00 145.14 us 78.00 us 367.00 us 71 REMOVEXATTR 0.00 190.96 us 114.00 us 298.00 us 71 SETATTR 0.00 213.38 us 24.00 us 1145.00 us 90 READDIR 0.00 263.28 us 20.00 us 28385.00 us 228 LK 0.00 101.76 us 2.00 us 880.00 us 1093 OPENDIR 0.01 93.60 us 27.00 us 10371.00 us 3090 STATFS 0.02 537.47 us 17.00 us 193439.00 us 1038 GETXATTR 0.03 297.44 us 35.00 us 9799.00 us 1990 READDIRP 0.03 2357.28 us 110.00 us 382258.00 us 253 XATTROP 0.04 385.93 us 58.00 us 47593.00 us 2091 OPEN 0.04 114.86 us 24.00 us 61061.00 us 7715 STAT 0.06 444.59 us 57.00 us 333240.00 us 3053 DISCARD 0.42 316.24 us 25.00 us 290728.00 us 29823 LOOKUP 0.73 257.92 us 54.00 us 344812.00 us 63296 FXATTROP 1.37 98.30 us 28.00 us 67621.00 us 313172 FSTAT 1.58 2124.69 us 51.00 us 849200.00 us 16717 FSYNC 5.73 162.46 us 52.00 us 748492.00 us 794079 WRITE 7.19 2065.17 us 16.00 us 37098954.00 us 78381 FINODELK 36.44 886.32 us 25.00 us 2216436.00 us 925421 READ 46.30 1178.04 us 13.00 us 1700704.00 us 884635 INODELK Duration: 7485 seconds Data Read: 71250527215 bytes Data Written: 5119903744 bytes Brick: ovirt3.nwfiber.com:/gluster/brick2/data ---------------------------------------------- Cumulative Stats: Block Size: 1b+ No. of Reads: 0 No. of Writes: 3264419 %-latency Avg-latency Min-Latency Max-Latency No. 
of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 90 FORGET 0.00 0.00 us 0.00 us 0.00 us 9462 RELEASE 0.00 0.00 us 0.00 us 0.00 us 4254 RELEASEDIR 0.00 50.52 us 13.00 us 190.00 us 71 FLUSH 0.00 186.97 us 87.00 us 713.00 us 86 REMOVEXATTR 0.00 79.32 us 33.00 us 189.00 us 228 LK 0.00 220.98 us 129.00 us 513.00 us 86 SETATTR 0.01 259.30 us 26.00 us 2632.00 us 137 READDIR 0.02 322.76 us 145.00 us 2125.00 us 321 XATTROP 0.03 109.55 us 2.00 us 1258.00 us 1193 OPENDIR 0.05 70.21 us 21.00 us 431.00 us 3196 DISCARD 0.05 169.26 us 21.00 us 2315.00 us 1545 GETXATTR 0.12 176.85 us 63.00 us 2844.00 us 3206 OPEN 0.61 303.49 us 90.00 us 3085.00 us 9633 FSTAT 2.44 305.66 us 28.00 us 3716.00 us 38230 LOOKUP 4.52 266.22 us 55.00 us 53424.00 us 81455 FXATTROP 6.96 1397.99 us 51.00 us 64822.00 us 23889 FSYNC 16.48 84.74 us 25.00 us 6917.00 us 932592 WRITE 30.16 106.90 us 13.00 us 3920189.00 us 1353046 INODELK 38.55 1794.52 us 14.00 us 16210553.00 us 103039 FINODELK Duration: 66562 seconds Data Read: 0 bytes Data Written: 3264419 bytes Interval 1 Stats: Block Size: 1b+ No. of Reads: 0 No. of Writes: 794080 %-latency Avg-latency Min-Latency Max-Latency No. 
of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2093 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1093 RELEASEDIR 0.00 70.31 us 26.00 us 125.00 us 16 FLUSH 0.00 193.10 us 103.00 us 713.00 us 71 REMOVEXATTR 0.01 227.32 us 133.00 us 513.00 us 71 SETATTR 0.01 79.32 us 33.00 us 189.00 us 228 LK 0.01 259.83 us 35.00 us 1138.00 us 89 READDIR 0.03 318.26 us 145.00 us 2047.00 us 253 XATTROP 0.04 112.67 us 3.00 us 1258.00 us 1093 OPENDIR 0.06 167.98 us 23.00 us 1951.00 us 1014 GETXATTR 0.08 70.97 us 22.00 us 431.00 us 3053 DISCARD 0.13 183.78 us 66.00 us 2844.00 us 2091 OPEN 1.01 303.82 us 90.00 us 3085.00 us 9610 FSTAT 3.27 316.59 us 30.00 us 3716.00 us 29820 LOOKUP 5.83 265.79 us 59.00 us 53424.00 us 63296 FXATTROP 7.95 1373.89 us 51.00 us 64822.00 us 16717 FSYNC 23.17 851.99 us 14.00 us 16210553.00 us 78555 FINODELK 24.04 87.44 us 27.00 us 6917.00 us 794081 WRITE 34.36 111.91 us 14.00 us 984871.00 us 886790 INODELK Duration: 7485 seconds Data Read: 0 bytes Data Written: 794080 bytes
----------------------- Here is the data from the volume that is backed by the SHDDs and has one failed disk: [root@ovirt2 yum.repos.d]# gluster volume profile data-hdd info Brick: 172.172.1.12:/gluster/brick3/data-hdd -------------------------------------------- Cumulative Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 1702 86 16 No. of Writes: 0 767 71 Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 19 51841 2049 No. of Writes: 76 60668 35727 Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 1744 639 1088 No. of Writes: 8524 2410 1285 Block Size: 131072b+ No. of Reads: 771999 No. of Writes: 29584 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2902 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1517 RELEASEDIR 0.00 197.00 us 197.00 us 197.00 us 1 FTRUNCATE 0.00 70.24 us 16.00 us 758.00 us 51 FLUSH 0.00 143.93 us 82.00 us 305.00 us 57 REMOVEXATTR 0.00 178.63 us 105.00 us 712.00 us 60 SETATTR 0.00 67.30 us 19.00 us 572.00 us 555 LK 0.00 322.80 us 23.00 us 4673.00 us 138 READDIR 0.00 336.56 us 106.00 us 11994.00 us 237 XATTROP 0.00 84.70 us 28.00 us 1071.00 us 3469 STATFS 0.01 387.75 us 2.00 us 146017.00 us 1467 OPENDIR 0.01 148.59 us 21.00 us 64374.00 us 4454 STAT 0.02 783.02 us 16.00 us 93502.00 us 1902 GETXATTR 0.03 1516.10 us 17.00 us 210690.00 us 1364 ENTRYLK 0.03 2555.47 us 300.00 us 674454.00 us 1064 READDIRP 0.07 85.74 us 19.00 us 68340.00 us 62849 FSTAT 0.07 1978.12 us 59.00 us 202596.00 us 2729 OPEN 0.22 708.57 us 15.00 us 394799.00 us 25447 LOOKUP 5.94 2331.74 us 15.00 us 1099530.00 us 207534 FINODELK 7.31 8311.75 us 58.00 us 1800216.00 us 71668 FXATTROP 12.49 7735.19 us 51.00 us 3595513.00 us 131642 WRITE 17.70 957.08 us 16.00 us 13700466.00 us 1508160 INODELK 24.55 2546.43 us 26.00 us 5077347.00 us 786060 READ 31.56 49699.15 us 47.00 us 3746331.00 us 51777 FSYNC Duration: 10101 seconds Data Read: 101562897361 bytes Data Written: 4834450432 bytes 
Interval 0 Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 1702 86 16 No. of Writes: 0 767 71 Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 19 51841 2049 No. of Writes: 76 60668 35727 Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 1744 639 1088 No. of Writes: 8524 2410 1285 Block Size: 131072b+ No. of Reads: 771999 No. of Writes: 29584 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2902 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1517 RELEASEDIR 0.00 197.00 us 197.00 us 197.00 us 1 FTRUNCATE 0.00 70.24 us 16.00 us 758.00 us 51 FLUSH 0.00 143.93 us 82.00 us 305.00 us 57 REMOVEXATTR 0.00 178.63 us 105.00 us 712.00 us 60 SETATTR 0.00 67.30 us 19.00 us 572.00 us 555 LK 0.00 322.80 us 23.00 us 4673.00 us 138 READDIR 0.00 336.56 us 106.00 us 11994.00 us 237 XATTROP 0.00 84.70 us 28.00 us 1071.00 us 3469 STATFS 0.01 387.75 us 2.00 us 146017.00 us 1467 OPENDIR 0.01 148.59 us 21.00 us 64374.00 us 4454 STAT 0.02 783.02 us 16.00 us 93502.00 us 1902 GETXATTR 0.03 1516.10 us 17.00 us 210690.00 us 1364 ENTRYLK 0.03 2555.47 us 300.00 us 674454.00 us 1064 READDIRP 0.07 85.73 us 19.00 us 68340.00 us 62849 FSTAT 0.07 1978.12 us 59.00 us 202596.00 us 2729 OPEN 0.22 708.57 us 15.00 us 394799.00 us 25447 LOOKUP 5.94 2334.57 us 15.00 us 1099530.00 us 207534 FINODELK 7.31 8311.49 us 58.00 us 1800216.00 us 71668 FXATTROP 12.49 7735.32 us 51.00 us 3595513.00 us 131642 WRITE 17.71 957.08 us 16.00 us 13700466.00 us 1508160 INODELK 24.56 2546.42 us 26.00 us 5077347.00 us 786060 READ 31.54 49651.63 us 47.00 us 3746331.00 us 51777 FSYNC Duration: 10101 seconds Data Read: 101562897361 bytes Data Written: 4834450432 bytes
On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir <jim@palousetech.com <mailto:jim@palousetech.com>> wrote:
Thank you for your response.
I have 4 gluster volumes. 3 are replica 2 + arbiter: the replica bricks are on ovirt1 and ovirt2, with the arbiter on ovirt3. The 4th volume is replica 3, with a brick on all three ovirt machines.
The first 3 volumes are on an SSD disk; the 4th is on a Seagate SSHD (the same in all three machines). On ovirt3, the SSHD has reported hard IO failures, and that brick is offline. However, the other two replicas are fully operational (although they still show entries in the heal info output that won't go away; that may be expected until I replace the failed disk).
What is bothering me is that ALL 4 gluster volumes are showing horrible performance issues. At this point, with the bad disk completely offlined, I would expect gluster to perform at normal speed, but that is definitely not the case.
I've also noticed that the performance hits seem to come in waves: things work acceptably (but slowly) for a while, then suddenly it's as if all disk IO on all volumes (including the hosts' non-gluster local OS disk volumes) pauses for about 30 seconds, then IO resumes. During those times, I start getting "VM not responding" and "host not responding" notices, and the applications have major issues.
I've shut down most of my VMs and am down to just my essential core VMs (I shed about 75% of my VMs). I am still experiencing the same issues.
Am I correct in believing that once the failed disk was brought offline that performance should return to normal?
On Tue, May 29, 2018 at 1:27 PM, Alex K <rightkicktech@gmail.com <mailto:rightkicktech@gmail.com>> wrote:
I would check disks status and accessibility of mount points where your gluster volumes reside.
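For example, something along these lines (brick paths and device names below are placeholders taken from this thread; each command is guarded so the sequence is safe to paste anywhere):

```shell
# Is the brick filesystem still mounted and not full?
df -h /gluster/brick3 2>/dev/null || echo "brick path not mounted here"

# Are the gluster FUSE mounts for the volumes still present?
mount | grep -i gluster || echo "no gluster mounts found"

# Any recent block-layer errors from the kernel?
dmesg 2>/dev/null | grep -iE 'i/o error|blk_update_request' | tail || true

# If smartmontools is installed, check the physical disk behind the brick:
#   smartctl -H /dev/sdX
```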
On Tue, May 29, 2018, 22:28 Jim Kusznir <jim@palousetech.com <mailto:jim@palousetech.com>> wrote:
On one ovirt server, I'm now seeing these messages:
[56474.239725] blk_update_request: 63 callbacks suppressed
[56474.239732] blk_update_request: I/O error, dev dm-2, sector 0
[56474.240602] blk_update_request: I/O error, dev dm-2, sector 3905945472
[56474.241346] blk_update_request: I/O error, dev dm-2, sector 3905945584
[56474.242236] blk_update_request: I/O error, dev dm-2, sector 2048
[56474.243072] blk_update_request: I/O error, dev dm-2, sector 3905943424
[56474.243997] blk_update_request: I/O error, dev dm-2, sector 3905943536
[56474.247347] blk_update_request: I/O error, dev dm-2, sector 0
[56474.248315] blk_update_request: I/O error, dev dm-2, sector 3905945472
[56474.249231] blk_update_request: I/O error, dev dm-2, sector 3905945584
[56474.250221] blk_update_request: I/O error, dev dm-2, sector 2048
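To pin those errors to a physical disk, the dm-2 device-mapper name can be resolved back to its LV and underlying drive. A sketch (device names vary per host; commands are guarded so the block runs anywhere):

```shell
# Which LVM logical volume is dm-2? The /dev/mapper symlinks point at dm-N.
ls -l /dev/mapper 2>/dev/null | grep -w 'dm-2' || echo "no dm-2 on this host"

# Full block-device tree: shows which physical disk sits under each dm device.
lsblk -o NAME,KNAME,TYPE,SIZE,MOUNTPOINT 2>/dev/null || true

# dmsetup can also resolve the mapping table and dependencies (needs root):
#   dmsetup info /dev/dm-2
#   dmsetup deps /dev/dm-2
```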
On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir <jim@palousetech.com <mailto:jim@palousetech.com>> wrote:
I see in messages on ovirt3 (my 3rd machine, the one upgraded to 4.2):
May 29 11:54:41 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
May 29 11:54:51 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
May 29 11:55:01 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
(appears a lot).
I also found, in the ssh session on that host, some sysv warnings about the backing disk for one of the gluster volumes (the straight replica 3 one). The glusterfs process for that disk on that machine went offline. It's my understanding that the volume should continue to work with the other two machines while I attempt to replace that disk, right? Attempted writes (touching an empty file) can take 15 seconds, although repeating the same write later is much faster.
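As a crude way to quantify those intermittent stalls, one can time a series of small synced writes on the mount. A sketch; MNT is a placeholder that should point at the FUSE mount of the affected volume (it defaults to a local temp path here just so the loop is runnable):

```shell
# Rough write-latency probe: ten one-byte synced writes, timed in milliseconds.
MNT="${MNT:-/tmp/latency-probe}"   # placeholder; set MNT=/rhev/... for gluster
mkdir -p "$MNT"
for i in $(seq 1 10); do
    start=$(date +%s%N)
    dd if=/dev/zero of="$MNT/probe.$i" bs=1 count=1 conv=fsync 2>/dev/null
    end=$(date +%s%N)
    echo "write $i: $(( (end - start) / 1000000 )) ms"
done
rm -f "$MNT"/probe.*
```

On a healthy local disk each write should land in single-digit milliseconds; during one of the 30-second pauses described above, individual writes would show up as multi-second outliers.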
Gluster generates a bunch of different log files; I don't know which ones you want, or from which machine(s).
How do I do "volume profiling"?
Thanks!
On Tue, May 29, 2018 at 11:53 AM, Sahina Bose <sabose@redhat.com <mailto:sabose@redhat.com>> wrote:
Do you see errors reported in the mount logs for the volume? If so, could you attach the logs? Any issues with your underlying disks. Can you also attach output of volume profiling?
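In case it helps, profiling is driven from the gluster CLI: start it, let the workload run for a few minutes, dump the counters, then stop. A sketch using the volume name from this thread (substitute your own):

```shell
# Begin collecting per-FOP latency and throughput counters for the volume.
gluster volume profile data-hdd start

# Let real load run for a while so the interval stats are meaningful.
sleep 300

# Dump cumulative and interval stats (this is the output shown later in
# this thread), then stop profiling, since it adds a little overhead.
gluster volume profile data-hdd info
gluster volume profile data-hdd stop
```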
On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir <jim@palousetech.com <mailto:jim@palousetech.com>> wrote:
Ok, things have gotten MUCH worse this morning. I'm getting random errors from VMs; right now about a third of my VMs have been paused due to storage issues, and most of the remaining VMs are not performing well.
At this point, I am in full EMERGENCY mode, as my production services are now impacted, and I'm getting calls coming in with problems...
I'd greatly appreciate help... VMs are running VERY slowly (when they run at all), and they are steadily getting worse. I don't know why. I was seeing CPU peaks (to 100%) on several VMs, in perfect sync, for a few minutes at a time (while each VM became unresponsive, and the linux VMs I was logged into were giving me the CPU-stuck messages from my original post). Is all this storage related?
I also have two different gluster volumes for VM storage. Originally only one had the issues, but now VMs on both are being affected at the same time and in the same way.
--Jim
On Mon, May 28, 2018 at 10:50 PM, Sahina Bose <sabose@redhat.com <mailto:sabose@redhat.com>> wrote:
[Adding gluster-users to look at the heal issue]
On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir <jim@palousetech.com <mailto:jim@palousetech.com>> wrote:
Hello:
I've been having some cluster and gluster performance issues lately. I also found that my cluster was out of date, and while trying to apply updates (hoping to fix some of these), I discovered the oVirt 4.1 repos had been taken completely offline. So I was forced to begin an upgrade to 4.2. According to the docs I found, I needed only to add the new repo, do a yum update, and reboot each host (I did the yum update on the hosts and ran engine-setup on my hosted engine). Things seemed to work relatively well, except for a gluster sync issue that showed up.
My cluster is a 3 node hyperconverged cluster. I upgraded the hosted engine first, then engine 3. When engine 3 came back up, for some reason one of my gluster volumes would not sync. Here's sample output:
[root@ovirt3 ~]# gluster volume heal data-hdd info
Brick 172.172.1.11:/gluster/brick3/data-hdd
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
Status: Connected
Number of entries: 8
Brick 172.172.1.12:/gluster/brick3/data-hdd
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
Status: Connected
Number of entries: 8
Brick 172.172.1.13:/gluster/brick3/data-hdd
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
Status: Connected
Number of entries: 8
---------
It's been in this state for a couple of days now, and bandwidth monitoring shows no appreciable data moving. I've tried repeatedly commanding a full heal from all three nodes in the cluster. It's always the same files that need healing.
When running gluster volume heal data-hdd statistics, I sometimes see different information, but always some number of "heal failed" entries. It shows 0 for split brain.
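The commands involved, for anyone following along (volume name from this thread; substitute your own):

```shell
# Queue a full self-heal crawl on every brick of the volume.
gluster volume heal data-hdd full

# Entries still pending heal, listed per brick (the output shown earlier).
gluster volume heal data-hdd info

# Per-crawl counters: entries healed, heal-failed, and split-brain counts.
gluster volume heal data-hdd statistics

# List only files that are actually in split-brain (should be empty here).
gluster volume heal data-hdd info split-brain
```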
I'm not quite sure what to do. I suspect it may be due to nodes 1 and 2 still being on the older ovirt/gluster release, but I'm afraid to upgrade and reboot them until I have a good gluster sync (I don't want to create a split-brain issue). How do I proceed with this?
Second issue: I've been experiencing VERY POOR performance on most of my VMs. To the tune that logging into a Windows 10 VM via remote desktop can take 5 minutes, and launching QuickBooks inside said VM can easily take 10 minutes. On some linux VMs, I get random messages like this:
Message from syslogd@unifi at May 28 20:39:23 ...
kernel:[6171996.308904] NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [mongod:14766]
(the process and PID are often different)
I'm not quite sure what to do about this either. My initial thought was to upgrade everything to current and see if the problem persists, but I cannot move forward with that until my gluster is healed...
Thanks! --Jim
_______________________________________________ Users mailing list -- users@ovirt.org <mailto:users@ovirt.org> To unsubscribe send an email to users-leave@ovirt.org <mailto:users-leave@ovirt.org> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ <https://www.ovirt.org/site/privacy-policy/> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ <https://www.ovirt.org/community/about/community-guidelines/> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3LEV6ZQ3JV2XLA... <https://lists.ovirt.org/archives/list/users@ovirt.org/message/3LEV6ZQ3JV2XLAL7NYBTXOYMYUOTIRQF/>

Hi: Thank you. I was finally able to get my cluster minimally functional again (it's 2 a.m. local here; I have to be up by 6 a.m.). I set cluster.eager-lock off, but I'm still seeing performance issues. Here's the engine volume:

Brick: ovirt1.nwfiber.com:/gluster/brick1/engine
------------------------------------------------
Cumulative Stats:
Block Size: 256b+ 512b+ 4096b+
No. of Reads: 702 24 772
No. of Writes: 0 939 4802
Block Size: 8192b+ 16384b+ 32768b+
No. of Reads: 0 51 49
No. of Writes: 4211 858 291
Block Size: 65536b+ 131072b+
No. of Reads: 49 5232
No. of Writes: 333 588
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 3 FORGET
0.00 0.00 us 0.00 us 0.00 us 1227 RELEASE
0.00 0.00 us 0.00 us 0.00 us 1035 RELEASEDIR
0.00 827.00 us 619.00 us 1199.00 us 10 READDIRP
0.00 98.38 us 30.00 us 535.00 us 144 ENTRYLK
0.00 180.11 us 121.00 us 257.00 us 94 REMOVEXATTR
0.00 208.03 us 23.00 us 1980.00 us 182 GETXATTR
0.00 1212.71 us 35.00 us 29459.00 us 38 READDIR
0.01 260.39 us 172.00 us 552.00 us 376 XATTROP
0.01 305.10 us 45.00 us 144983.00 us 727 STATFS
0.03 1267.23 us 22.00 us 418399.00 us 451 FLUSH
0.04 1164.14 us 86.00 us 437638.00 us 600 OPEN
0.07 2253.75 us 3.00 us 645037.00 us 598 OPENDIR
0.08 17360.97 us 149.00 us 962156.00 us 94 SETATTR
0.12 2733.36 us 50.00 us 1683945.00 us 877 FSTAT
0.21 3041.55 us 29.00 us 1021732.00 us 1368 INODELK
0.24 389.93 us 107.00 us 550203.00 us 11840 FXATTROP
0.34 14820.49 us 38.00 us 1935527.00 us 444 STAT
0.88 3765.77 us 42.00 us 1341978.00 us 4581 LOOKUP
15.06 19624.04 us 26.00 us 6412511.00 us 14971 FINODELK
21.63 59817.26 us 45.00 us 4008832.00 us 7054 FSYNC
23.19 263941.14 us 86.00 us 4130892.00 us 1714 READ
38.10 125578.64 us 207.00 us 7682338.00 us 5919 WRITE
Duration: 6957 seconds
Data Read: 694832857 bytes
Data Written: 192751104 bytes

Interval 0 Stats:
Block Size: 256b+ 512b+ 4096b+
No. of Reads: 702 24 772
No. of Writes: 0 939 4802
Block Size: 8192b+ 16384b+ 32768b+
No. of Reads: 0 51 49
No. of Writes: 4211 858 291
Block Size: 65536b+ 131072b+
No. of Reads: 49 5232
No. of Writes: 333 588
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 3 FORGET
0.00 0.00 us 0.00 us 0.00 us 1227 RELEASE
0.00 0.00 us 0.00 us 0.00 us 1035 RELEASEDIR
0.00 827.00 us 619.00 us 1199.00 us 10 READDIRP
0.00 98.38 us 30.00 us 535.00 us 144 ENTRYLK
0.00 180.11 us 121.00 us 257.00 us 94 REMOVEXATTR
0.00 208.03 us 23.00 us 1980.00 us 182 GETXATTR
0.00 1212.71 us 35.00 us 29459.00 us 38 READDIR
0.01 260.39 us 172.00 us 552.00 us 376 XATTROP
0.01 305.10 us 45.00 us 144983.00 us 727 STATFS
0.03 1267.23 us 22.00 us 418399.00 us 451 FLUSH
0.04 1164.14 us 86.00 us 437638.00 us 600 OPEN
0.07 2253.75 us 3.00 us 645037.00 us 598 OPENDIR
0.08 17360.97 us 149.00 us 962156.00 us 94 SETATTR
0.12 2733.36 us 50.00 us 1683945.00 us 877 FSTAT
0.21 3041.55 us 29.00 us 1021732.00 us 1368 INODELK
0.24 389.93 us 107.00 us 550203.00 us 11840 FXATTROP
0.34 14820.49 us 38.00 us 1935527.00 us 444 STAT
0.91 3859.75 us 42.00 us 1341978.00 us 4581 LOOKUP
15.05 19622.80 us 26.00 us 6412511.00 us 14971 FINODELK
21.62 59809.04 us 45.00 us 4008832.00 us 7054 FSYNC
23.18 263941.14 us 86.00 us 4130892.00 us 1714 READ
38.09 125562.74 us 207.00 us 7682338.00 us 5919 WRITE
Duration: 6957 seconds
Data Read: 694832857 bytes
Data Written: 192751104 bytes

Brick: ovirt2.nwfiber.com:/gluster/brick1/engine
------------------------------------------------
Cumulative Stats:
Block Size: 256b+ 512b+ 1024b+
No. of Reads: 704 167 216
No. of Writes: 0 939 0
Block Size: 2048b+ 4096b+ 8192b+
No. of Reads: 424 34918 9866
No. of Writes: 0 4802 4211
Block Size: 16384b+ 32768b+ 65536b+
No. of Reads: 12695 3290 3880
No. of Writes: 858 291 333
Block Size: 131072b+
No. of Reads: 13718
No. of Writes: 596
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 3 FORGET
0.00 0.00 us 0.00 us 0.00 us 1228 RELEASE
0.00 0.00 us 0.00 us 0.00 us 1041 RELEASEDIR
0.06 74.26 us 21.00 us 915.00 us 144 ENTRYLK
0.07 141.97 us 87.00 us 268.00 us 94 REMOVEXATTR
0.08 370.68 us 22.00 us 1311.00 us 38 READDIR
0.09 184.88 us 108.00 us 316.00 us 94 SETATTR
0.14 56.41 us 21.00 us 540.00 us 451 FLUSH
0.14 139.42 us 18.00 us 3251.00 us 184 GETXATTR
0.31 96.06 us 2.00 us 2293.00 us 598 OPENDIR
0.35 88.76 us 31.00 us 716.00 us 727 STATFS
0.43 131.96 us 59.00 us 784.00 us 600 OPEN
0.44 214.77 us 110.00 us 851.00 us 376 XATTROP
0.44 81.02 us 21.00 us 3029.00 us 998 STAT
0.51 68.57 us 17.00 us 3039.00 us 1368 INODELK
1.26 429.06 us 274.00 us 988.00 us 539 READDIRP
1.89 109.48 us 38.00 us 2492.00 us 3175 FSTAT
5.09 62.48 us 19.00 us 4578.00 us 14967 FINODELK
5.72 229.43 us 32.00 us 979.00 us 4581 LOOKUP
11.48 356.40 us 144.00 us 16222.00 us 5917 WRITE
12.40 192.55 us 87.00 us 4372.00 us 11834 FXATTROP
12.48 325.11 us 33.00 us 10702.00 us 7051 FSYNC
46.61 2063.11 us 131.00 us 193833.00 us 4151 READ
Duration: 6958 seconds
Data Read: 2782908225 bytes
Data Written: 193799680 bytes

Interval 0 Stats:
Block Size: 256b+ 512b+ 1024b+
No. of Reads: 704 167 216
No. of Writes: 0 939 0
Block Size: 2048b+ 4096b+ 8192b+
No. of Reads: 424 34918 9866
No. of Writes: 0 4802 4211
Block Size: 16384b+ 32768b+ 65536b+
No. of Reads: 12695 3290 3880
No. of Writes: 858 291 333
Block Size: 131072b+
No. of Reads: 13718
No. of Writes: 596
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 3 FORGET
0.00 0.00 us 0.00 us 0.00 us 1228 RELEASE
0.00 0.00 us 0.00 us 0.00 us 1041 RELEASEDIR
0.06 74.26 us 21.00 us 915.00 us 144 ENTRYLK
0.07 141.97 us 87.00 us 268.00 us 94 REMOVEXATTR
0.08 370.68 us 22.00 us 1311.00 us 38 READDIR
0.09 184.88 us 108.00 us 316.00 us 94 SETATTR
0.14 56.41 us 21.00 us 540.00 us 451 FLUSH
0.14 139.42 us 18.00 us 3251.00 us 184 GETXATTR
0.31 96.06 us 2.00 us 2293.00 us 598 OPENDIR
0.35 88.76 us 31.00 us 716.00 us 727 STATFS
0.43 131.96 us 59.00 us 784.00 us 600 OPEN
0.44 214.77 us 110.00 us 851.00 us 376 XATTROP
0.44 81.02 us 21.00 us 3029.00 us 998 STAT
0.51 68.57 us 17.00 us 3039.00 us 1368 INODELK
1.26 429.06 us 274.00 us 988.00 us 539 READDIRP
1.89 109.51 us 38.00 us 2492.00 us 3175 FSTAT
5.09 62.48 us 19.00 us 4578.00 us 14967 FINODELK
5.72 229.43 us 32.00 us 979.00 us 4581 LOOKUP
11.48 356.40 us 144.00 us 16222.00 us 5917 WRITE
12.40 192.55 us 87.00 us 4372.00 us 11834 FXATTROP
12.48 325.11 us 33.00 us 10702.00 us 7051 FSYNC
46.62 2063.91 us 131.00 us 193833.00 us 4151 READ
Duration: 6958 seconds
Data Read: 2782908225 bytes
Data Written: 193799680 bytes

gluster> volume info engine

Volume Name: engine
Type: Replicate
Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
performance.strict-o-direct: on
nfs.disable: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 6
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: off
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on

On Tue, May 29, 2018 at 9:17 PM, Krutika Dhananjay <kdhananj@redhat.com> wrote:
Adding Ravi to look into the heal issue.
As for the fsync hang and subsequent IO errors, it seems a lot like https://bugzilla.redhat.com/show_bug.cgi?id=1497156 and Paolo Bonzini from qemu had pointed out that this would be fixed by the following commit:
commit e72c9a2a67a6400c8ef3d01d4c461dbbbfa0e1f0
Author: Paolo Bonzini <pbonzini@redhat.com>
Date:   Wed Jun 21 16:35:46 2017 +0200
scsi: virtio_scsi: let host do exception handling
virtio_scsi tries to do exception handling after the default 30 seconds timeout expires. However, it's better to let the host control the timeout, otherwise with a heavy I/O load it is likely that an abort will also timeout. This leads to fatal errors like filesystems going offline.
Disable the 'sd' timeout and allow the host to do exception handling, following the precedent of the storvsc driver.
Hannes has a proposal to introduce timeouts in virtio, but this provides an immediate solution for stable kernels too.
[mkp: fixed typo]
Reported-by: Douglas Miller <dougmill@linux.vnet.ibm.com> Cc: "James E.J. Bottomley" <jejb@linux.vnet.ibm.com> Cc: "Martin K. Petersen" <martin.petersen@oracle.com> Cc: Hannes Reinecke <hare@suse.de> Cc: linux-scsi@vger.kernel.org Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
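For anyone wanting to check whether a guest is still relying on the 'sd' timeout this patch disables, the value is visible in sysfs inside the guest (a sketch; device names vary, and 30 seconds is the usual default):

```shell
# Print the block-layer command timeout for each SCSI disk in the guest.
# With the fix above, the guest stops doing its own exception handling on
# this timeout and lets the host handle it instead.
for t in /sys/block/sd*/device/timeout; do
    [ -e "$t" ] || continue   # skip if no SCSI/virtio-scsi disks are present
    printf '%s: %ss\n' "$t" "$(cat "$t")"
done
```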
Adding Paolo/Kevin to comment.
As for the poor gluster performance, could you disable cluster.eager-lock and see if that makes any difference:
# gluster volume set <VOL> cluster.eager-lock off
Do also capture the volume profile again if you still see performance issues after disabling eager-lock.
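For completeness, the capture sequence would look roughly like this (a sketch; data-hdd is substituted as an example volume name, and the sleep is just a placeholder for letting a representative workload run):

```shell
VOL=data-hdd   # example volume name; substitute your own

gluster volume set "$VOL" cluster.eager-lock off   # the change suggested above

gluster volume profile "$VOL" start                # begin collecting FOP latency stats
sleep 600                                          # let a representative workload run
gluster volume profile "$VOL" info > "/tmp/${VOL}-profile.txt"
gluster volume profile "$VOL" stop                 # stop collecting when done
```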
-Krutika
On Wed, May 30, 2018 at 6:55 AM, Jim Kusznir <jim@palousetech.com> wrote:
I also finally found the following in my system log on one server:
[10679.524491] INFO: task glusterclogro:14933 blocked for more than 120 seconds. [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10679.527144] glusterclogro D ffff97209832bf40 0 14933 1 0x00000080 [10679.527150] Call Trace: [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.527268] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10679.527271] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [10679.527275] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.527279] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds. [10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10679.529956] glusterposixfsy D ffff972495f84f10 0 14941 1 0x00000080 [10679.529961] Call Trace: [10679.529966] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.530003] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.530008] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.530038] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.530042] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.530046] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10679.530050] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [10679.530054] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.530058] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120 seconds. [10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
[10679.533732] glusteriotwr13 D ffff9720a83f0000 0 15486 1 0x00000080 [10679.533738] Call Trace: [10679.533747] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.533799] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.533806] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.533846] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.533852] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.533858] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10679.533863] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [10679.533868] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.533873] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10919.512757] INFO: task glusterclogro:14933 blocked for more than 120 seconds. [10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10919.516663] glusterclogro D ffff97209832bf40 0 14933 1 0x00000080 [10919.516677] Call Trace: [10919.516690] [<ffffffffb9913f79>] schedule+0x29/0x70 [10919.516696] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [10919.516703] [<ffffffffb951cc04>] ? blk_finish_plug+0x14/0x40 [10919.516768] [<ffffffffc05e9224>] ? _xfs_buf_ioapply+0x334/0x460 [xfs] [10919.516774] [<ffffffffb991432d>] wait_for_completion+0xfd/0x140 [10919.516782] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10919.516821] [<ffffffffc05eb0a3>] ? _xfs_buf_read+0x23/0x40 [xfs] [10919.516859] [<ffffffffc05eafa9>] xfs_buf_submit_wait+0xf9/0x1d0 [xfs] [10919.516902] [<ffffffffc061b279>] ? 
xfs_trans_read_buf_map+0x199/0x400 [xfs] [10919.516940] [<ffffffffc05eb0a3>] _xfs_buf_read+0x23/0x40 [xfs] [10919.516977] [<ffffffffc05eb1b9>] xfs_buf_read_map+0xf9/0x160 [xfs] [10919.517022] [<ffffffffc061b279>] xfs_trans_read_buf_map+0x199/0x400 [xfs] [10919.517057] [<ffffffffc05c8d04>] xfs_da_read_buf+0xd4/0x100 [xfs] [10919.517091] [<ffffffffc05c8d53>] xfs_da3_node_read+0x23/0xd0 [xfs] [10919.517126] [<ffffffffc05c9fee>] xfs_da3_node_lookup_int+0x6e/0x2f0 [xfs] [10919.517160] [<ffffffffc05d5a1d>] xfs_dir2_node_lookup+0x4d/0x170 [xfs] [10919.517194] [<ffffffffc05ccf5d>] xfs_dir_lookup+0x1bd/0x1e0 [xfs] [10919.517233] [<ffffffffc05fd8d9>] xfs_lookup+0x69/0x140 [xfs] [10919.517271] [<ffffffffc05fa018>] xfs_vn_lookup+0x78/0xc0 [xfs] [10919.517278] [<ffffffffb9425cf3>] lookup_real+0x23/0x60 [10919.517283] [<ffffffffb9426702>] __lookup_hash+0x42/0x60 [10919.517288] [<ffffffffb942d519>] SYSC_renameat2+0x3a9/0x5a0 [10919.517296] [<ffffffffb94d3753>] ? selinux_file_free_security+0x2 3/0x30 [10919.517304] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10919.517309] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10919.517313] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10919.517318] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10919.517323] [<ffffffffb942e58e>] SyS_renameat2+0xe/0x10 [10919.517328] [<ffffffffb942e5ce>] SyS_rename+0x1e/0x20 [10919.517333] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10919.517339] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11159.496095] INFO: task glusteriotwr9:15482 blocked for more than 120 seconds. [11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11159.498978] glusteriotwr9 D ffff971fa0fa1fa0 0 15482 1 0x00000080 [11159.498984] Call Trace: [11159.498995] [<ffffffffb9911f00>] ? 
bit_wait+0x50/0x50 [11159.498999] [<ffffffffb9913f79>] schedule+0x29/0x70 [11159.499003] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11159.499056] [<ffffffffc05dd9b7>] ? xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs] [11159.499082] [<ffffffffc05dd43e>] ? xfs_iext_bno_to_irec+0x8e/0xd0 [xfs] [11159.499090] [<ffffffffb92f7a12>] ? ktime_get_ts64+0x52/0xf0 [11159.499093] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50 [11159.499097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20 [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50 [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0 [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90 [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40 [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70 [11159.499125] [<ffffffffb9394e85>] grab_cache_page_write_begin+0x 55/0xc0 [11159.499130] [<ffffffffb9484b76>] iomap_write_begin+0x66/0x100 [11159.499135] [<ffffffffb9484edf>] iomap_write_actor+0xcf/0x1d0 [11159.499140] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 [11159.499144] [<ffffffffb94854e7>] iomap_apply+0xb7/0x150 [11159.499149] [<ffffffffb9485621>] iomap_file_buffered_write+0xa1/0xe0 [11159.499153] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 [11159.499182] [<ffffffffc05f025d>] xfs_file_buffered_aio_write+0x12d/0x2c0 [xfs] [11159.499213] [<ffffffffc05f057d>] xfs_file_aio_write+0x18d/0x1b0 [xfs] [11159.499217] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 [11159.499222] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 [11159.499225] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 [11159.499230] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11159.499234] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11159.499238] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than 120 seconds. 
[11279.490197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.491665] xfsaild/dm-10 D ffff9720a8660fd0 0 1134 2 0x00000000 [11279.491671] Call Trace: [11279.491682] [<ffffffffb92a3a2e>] ? try_to_del_timer_sync+0x5e/0x90 [11279.491688] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.491744] [<ffffffffc060de36>] _xfs_log_force+0x1c6/0x2c0 [xfs] [11279.491750] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.491783] [<ffffffffc0619fec>] ? xfsaild+0x16c/0x6f0 [xfs] [11279.491817] [<ffffffffc060df5c>] xfs_log_force+0x2c/0x70 [xfs] [11279.491849] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs] [11279.491880] [<ffffffffc0619fec>] xfsaild+0x16c/0x6f0 [xfs] [11279.491913] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs] [11279.491919] [<ffffffffb92bb161>] kthread+0xd1/0xe0 [11279.491926] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 [11279.491932] [<ffffffffb9920677>] ret_from_fork_nospec_begin+0x21/0x21 [11279.491936] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 [11279.491976] INFO: task glusterclogfsyn:14934 blocked for more than 120 seconds. [11279.493466] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.494952] glusterclogfsyn D ffff97209832af70 0 14934 1 0x00000080 [11279.494957] Call Trace: [11279.494979] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod] [11279.494983] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.494987] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11279.494997] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod] [11279.495001] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11279.495005] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140 [11279.495010] [<ffffffffb92cf1b0>] ? 
wake_up_state+0x20/0x20 [11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 [11279.495049] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs] [11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] [11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.495090] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11279.495094] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [11279.495098] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.495102] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11279.495105] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds. [11279.496606] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.498114] glusterposixfsy D ffff972495f84f10 0 14941 1 0x00000080 [11279.498118] Call Trace: [11279.498134] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod] [11279.498138] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.498142] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11279.498152] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod] [11279.498156] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11279.498160] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140 [11279.498165] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.498169] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 [11279.498202] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs] [11279.498231] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] [11279.498238] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.498242] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11279.498246] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [11279.498250] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.498254] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11279.498257] INFO: task glusteriotwr1:14950 blocked for more than 120 seconds. 
[11279.499789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.501343] glusteriotwr1 D ffff97208b6daf70 0 14950 1 0x00000080 [11279.501348] Call Trace: [11279.501353] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.501390] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [11279.501396] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.501428] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [11279.501432] [<ffffffffb944ef3f>] generic_write_sync+0x4f/0x70 [11279.501461] [<ffffffffc05f0545>] xfs_file_aio_write+0x155/0x1b0 [xfs] [11279.501466] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 [11279.501471] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 [11279.501475] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 [11279.501479] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11279.501483] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.501489] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11279.501493] INFO: task glusteriotwr4:14953 blocked for more than 120 seconds. [11279.503047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.504630] glusteriotwr4 D ffff972499f2bf40 0 14953 1 0x00000080 [11279.504635] Call Trace: [11279.504640] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.504676] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [11279.504681] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.504710] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [11279.504714] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.504718] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11279.504722] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [11279.504725] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.504730] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [12127.466494] perf: interrupt took too long (8263 > 8150), lowering kernel.perf_event_max_sample_rate to 24000
--------------------
I think this is the cause of the massive ovirt performance issues, irrespective of gluster volume. At the time this happened, I was also ssh'ed into the host and was running some rpm query commands. I had just run rpm -qa | grep glusterfs (to verify which version was actually installed), and that command took almost 2 minutes to return! Normally it takes less than 2 seconds. That is all pure local SSD I/O, too....
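(For a repeatable number on that slowdown, a quick way to put a millisecond figure on a normally-fast local command is a sketch like this; rpm -qa is the query from above, guarded in case rpm isn't present:)

```shell
# Wall-clock timing of a normally-fast local query, in milliseconds.
start=$(date +%s%N)
rpm -qa > /dev/null 2>&1 || true   # took ~2 minutes on the affected host
end=$(date +%s%N)
echo "elapsed: $(( (end - start) / 1000000 )) ms"
```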
I'm no expert, but it's my understanding that any time software causes these kinds of issues, it's a serious bug in that software, even if it's just mishandled exceptions. Is this correct?
--Jim
On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir <jim@palousetech.com> wrote:
I think this is the profile information for one of the volumes that lives on the SSDs and is fully operational with no down/problem disks:
[root@ovirt2 yum.repos.d]# gluster volume profile data info
Brick: ovirt2.nwfiber.com:/gluster/brick2/data
----------------------------------------------
Cumulative Stats:
Block Size: 256b+ 512b+ 1024b+
No. of Reads: 983 2696 1059
No. of Writes: 0 1113 302
Block Size: 2048b+ 4096b+ 8192b+
No. of Reads: 852 88608 53526
No. of Writes: 522 812340 76257
Block Size: 16384b+ 32768b+ 65536b+
No. of Reads: 54351 241901 15024
No. of Writes: 21636 8656 8976
Block Size: 131072b+
No. of Reads: 524156
No. of Writes: 296071
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 4189 RELEASE
0.00 0.00 us 0.00 us 0.00 us 1257 RELEASEDIR
0.00 46.19 us 12.00 us 187.00 us 69 FLUSH
0.00 147.00 us 78.00 us 367.00 us 86 REMOVEXATTR
0.00 223.46 us 24.00 us 1166.00 us 149 READDIR
0.00 565.34 us 76.00 us 3639.00 us 88 FTRUNCATE
0.00 263.28 us 20.00 us 28385.00 us 228 LK
0.00 98.84 us 2.00 us 880.00 us 1198 OPENDIR
0.00 91.59 us 26.00 us 10371.00 us 3853 STATFS
0.00 494.14 us 17.00 us 193439.00 us 1171 GETXATTR
0.00 299.42 us 35.00 us 9799.00 us 2044 READDIRP
0.00 1965.31 us 110.00 us 382258.00 us 321 XATTROP
0.01 113.40 us 24.00 us 61061.00 us 8134 STAT
0.01 755.38 us 57.00 us 607603.00 us 3196 DISCARD
0.05 2690.09 us 58.00 us 2704761.00 us 3206 OPEN
0.10 119978.25 us 97.00 us 9406684.00 us 154 SETATTR
0.18 101.73 us 28.00 us 700477.00 us 313379 FSTAT
0.23 1059.84 us 25.00 us 2716124.00 us 38255 LOOKUP
0.47 1024.11 us 54.00 us 6197164.00 us 81455 FXATTROP
1.72 2984.00 us 15.00 us 37098954.00 us 103020 FINODELK
5.92 44315.32 us 51.00 us 24731536.00 us 23957 FSYNC
13.27 2399.78 us 25.00 us 22089540.00 us 991005 READ
37.00 5980.43 us 52.00 us 22099889.00 us 1108976 WRITE
41.04 5452.75 us 13.00 us 22102452.00 us 1349053 INODELK
Duration: 10026 seconds
Data Read: 80046027759 bytes
Data Written: 44496632320 bytes
Interval 1 Stats:
Block Size: 256b+ 512b+ 1024b+
No. of Reads: 983 2696 1059
No. of Writes: 0 838 185
Block Size: 2048b+ 4096b+ 8192b+
No. of Reads: 852 85856 51575
No. of Writes: 382 705802 57812
Block Size: 16384b+ 32768b+ 65536b+
No. of Reads: 52673 232093 14984
No. of Writes: 13499 4908 4242
Block Size: 131072b+
No. of Reads: 460040
No. of Writes: 6411
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 2093 RELEASE
0.00 0.00 us 0.00 us 0.00 us 1093 RELEASEDIR
0.00 53.38 us 26.00 us 111.00 us 16 FLUSH
0.00 145.14 us 78.00 us 367.00 us 71 REMOVEXATTR
0.00 190.96 us 114.00 us 298.00 us 71 SETATTR
0.00 213.38 us 24.00 us 1145.00 us 90 READDIR
0.00 263.28 us 20.00 us 28385.00 us 228 LK
0.00 101.76 us 2.00 us 880.00 us 1093 OPENDIR
0.01 93.60 us 27.00 us 10371.00 us 3090 STATFS
0.02 537.47 us 17.00 us 193439.00 us 1038 GETXATTR
0.03 297.44 us 35.00 us 9799.00 us 1990 READDIRP
0.03 2357.28 us 110.00 us 382258.00 us 253 XATTROP
0.04 385.93 us 58.00 us 47593.00 us 2091 OPEN
0.04 114.86 us 24.00 us 61061.00 us 7715 STAT
0.06 444.59 us 57.00 us 333240.00 us 3053 DISCARD
0.42 316.24 us 25.00 us 290728.00 us 29823 LOOKUP
0.73 257.92 us 54.00 us 344812.00 us 63296 FXATTROP
1.37 98.30 us 28.00 us 67621.00 us 313172 FSTAT
1.58 2124.69 us 51.00 us 849200.00 us 16717 FSYNC
5.73 162.46 us 52.00 us 748492.00 us 794079 WRITE
7.19 2065.17 us 16.00 us 37098954.00 us 78381 FINODELK
36.44 886.32 us 25.00 us 2216436.00 us 925421 READ
46.30 1178.04 us 13.00 us 1700704.00 us 884635 INODELK
Duration: 7485 seconds
Data Read: 71250527215 bytes
Data Written: 5119903744 bytes
Brick: ovirt3.nwfiber.com:/gluster/brick2/data
----------------------------------------------
Cumulative Stats:
Block Size: 1b+
No. of Reads: 0
No. of Writes: 3264419
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 90 FORGET
0.00 0.00 us 0.00 us 0.00 us 9462 RELEASE
0.00 0.00 us 0.00 us 0.00 us 4254 RELEASEDIR
0.00 50.52 us 13.00 us 190.00 us 71 FLUSH
0.00 186.97 us 87.00 us 713.00 us 86 REMOVEXATTR
0.00 79.32 us 33.00 us 189.00 us 228 LK
0.00 220.98 us 129.00 us 513.00 us 86 SETATTR
0.01 259.30 us 26.00 us 2632.00 us 137 READDIR
0.02 322.76 us 145.00 us 2125.00 us 321 XATTROP
0.03 109.55 us 2.00 us 1258.00 us 1193 OPENDIR
0.05 70.21 us 21.00 us 431.00 us 3196 DISCARD
0.05 169.26 us 21.00 us 2315.00 us 1545 GETXATTR
0.12 176.85 us 63.00 us 2844.00 us 3206 OPEN
0.61 303.49 us 90.00 us 3085.00 us 9633 FSTAT
2.44 305.66 us 28.00 us 3716.00 us 38230 LOOKUP
4.52 266.22 us 55.00 us 53424.00 us 81455 FXATTROP
6.96 1397.99 us 51.00 us 64822.00 us 23889 FSYNC
16.48 84.74 us 25.00 us 6917.00 us 932592 WRITE
30.16 106.90 us 13.00 us 3920189.00 us 1353046 INODELK
38.55 1794.52 us 14.00 us 16210553.00 us 103039 FINODELK
Duration: 66562 seconds
Data Read: 0 bytes
Data Written: 3264419 bytes
Interval 1 Stats:
Block Size: 1b+
No. of Reads: 0
No. of Writes: 794080
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 2093 RELEASE
0.00 0.00 us 0.00 us 0.00 us 1093 RELEASEDIR
0.00 70.31 us 26.00 us 125.00 us 16 FLUSH
0.00 193.10 us 103.00 us 713.00 us 71 REMOVEXATTR
0.01 227.32 us 133.00 us 513.00 us 71 SETATTR
0.01 79.32 us 33.00 us 189.00 us 228 LK
0.01 259.83 us 35.00 us 1138.00 us 89 READDIR
0.03 318.26 us 145.00 us 2047.00 us 253 XATTROP
0.04 112.67 us 3.00 us 1258.00 us 1093 OPENDIR
0.06 167.98 us 23.00 us 1951.00 us 1014 GETXATTR
0.08 70.97 us 22.00 us 431.00 us 3053 DISCARD
0.13 183.78 us 66.00 us 2844.00 us 2091 OPEN
1.01 303.82 us 90.00 us 3085.00 us 9610 FSTAT
3.27 316.59 us 30.00 us 3716.00 us 29820 LOOKUP
5.83 265.79 us 59.00 us 53424.00 us 63296 FXATTROP
7.95 1373.89 us 51.00 us 64822.00 us 16717 FSYNC
23.17 851.99 us 14.00 us 16210553.00 us 78555 FINODELK
24.04 87.44 us 27.00 us 6917.00 us 794081 WRITE
34.36 111.91 us 14.00 us 984871.00 us 886790 INODELK
Duration: 7485 seconds
Data Read: 0 bytes
Data Written: 794080 bytes
-----------------------
Here is the data from the volume that is backed by the SHDDs and has one failed disk:
[root@ovirt2 yum.repos.d]# gluster volume profile data-hdd info
Brick: 172.172.1.12:/gluster/brick3/data-hdd
--------------------------------------------
Cumulative Stats:
Block Size: 256b+ 512b+ 1024b+
No. of Reads: 1702 86 16
No. of Writes: 0 767 71
Block Size: 2048b+ 4096b+ 8192b+
No. of Reads: 19 51841 2049
No. of Writes: 76 60668 35727
Block Size: 16384b+ 32768b+ 65536b+
No. of Reads: 1744 639 1088
No. of Writes: 8524 2410 1285
Block Size: 131072b+
No. of Reads: 771999
No. of Writes: 29584
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 2902 RELEASE
0.00 0.00 us 0.00 us 0.00 us 1517 RELEASEDIR
0.00 197.00 us 197.00 us 197.00 us 1 FTRUNCATE
0.00 70.24 us 16.00 us 758.00 us 51 FLUSH
0.00 143.93 us 82.00 us 305.00 us 57 REMOVEXATTR
0.00 178.63 us 105.00 us 712.00 us 60 SETATTR
0.00 67.30 us 19.00 us 572.00 us 555 LK
0.00 322.80 us 23.00 us 4673.00 us 138 READDIR
0.00 336.56 us 106.00 us 11994.00 us 237 XATTROP
0.00 84.70 us 28.00 us 1071.00 us 3469 STATFS
0.01 387.75 us 2.00 us 146017.00 us 1467 OPENDIR
0.01 148.59 us 21.00 us 64374.00 us 4454 STAT
0.02 783.02 us 16.00 us 93502.00 us 1902 GETXATTR
0.03 1516.10 us 17.00 us 210690.00 us 1364 ENTRYLK
0.03 2555.47 us 300.00 us 674454.00 us 1064 READDIRP
0.07 85.74 us 19.00 us 68340.00 us 62849 FSTAT
0.07 1978.12 us 59.00 us 202596.00 us 2729 OPEN
0.22 708.57 us 15.00 us 394799.00 us 25447 LOOKUP
5.94 2331.74 us 15.00 us 1099530.00 us 207534 FINODELK
7.31 8311.75 us 58.00 us 1800216.00 us 71668 FXATTROP
12.49 7735.19 us 51.00 us 3595513.00 us 131642 WRITE
17.70 957.08 us 16.00 us 13700466.00 us 1508160 INODELK
24.55 2546.43 us 26.00 us 5077347.00 us 786060 READ
31.56 49699.15 us 47.00 us 3746331.00 us 51777 FSYNC
Duration: 10101 seconds Data Read: 101562897361 bytes Data Written: 4834450432 bytes
Interval 0 Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 1702 86 16 No. of Writes: 0 767 71
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 19 51841 2049 No. of Writes: 76 60668 35727
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 1744 639 1088 No. of Writes: 8524 2410 1285
Block Size: 131072b+ No. of Reads: 771999 No. of Writes: 29584 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2902 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1517 RELEASEDIR 0.00 197.00 us 197.00 us 197.00 us 1 FTRUNCATE 0.00 70.24 us 16.00 us 758.00 us 51 FLUSH 0.00 143.93 us 82.00 us 305.00 us 57 REMOVEXATTR 0.00 178.63 us 105.00 us 712.00 us 60 SETATTR 0.00 67.30 us 19.00 us 572.00 us 555 LK 0.00 322.80 us 23.00 us 4673.00 us 138 READDIR 0.00 336.56 us 106.00 us 11994.00 us 237 XATTROP 0.00 84.70 us 28.00 us 1071.00 us 3469 STATFS 0.01 387.75 us 2.00 us 146017.00 us 1467 OPENDIR 0.01 148.59 us 21.00 us 64374.00 us 4454 STAT 0.02 783.02 us 16.00 us 93502.00 us 1902 GETXATTR 0.03 1516.10 us 17.00 us 210690.00 us 1364 ENTRYLK 0.03 2555.47 us 300.00 us 674454.00 us 1064 READDIRP 0.07 85.73 us 19.00 us 68340.00 us 62849 FSTAT 0.07 1978.12 us 59.00 us 202596.00 us 2729 OPEN 0.22 708.57 us 15.00 us 394799.00 us 25447 LOOKUP 5.94 2334.57 us 15.00 us 1099530.00 us 207534 FINODELK 7.31 8311.49 us 58.00 us 1800216.00 us 71668 FXATTROP 12.49 7735.32 us 51.00 us 3595513.00 us 131642 WRITE 17.71 957.08 us 16.00 us 13700466.00 us 1508160 INODELK 24.56 2546.42 us 26.00 us 5077347.00 us 786060 READ 31.54 49651.63 us 47.00 us 3746331.00 us 51777 FSYNC
Duration: 10101 seconds Data Read: 101562897361 bytes Data Written: 4834450432 bytes
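As an aside for anyone comparing dumps like the ones above: rather than scanning the flattened `gluster volume profile ... info` output by eye, the %-latency rows can be ranked programmatically. This is a minimal sketch; `top_fops` is a hypothetical helper of mine, assuming only the row layout shown in this thread (pct, avg/min/max latency in us, call count, FOP name):

```python
import re

def top_fops(profile_text, n=3):
    """Parse the '%-latency ... Fop' rows from a `gluster volume profile`
    dump (even when flattened onto one line by a mail archive) and return
    the n FOPs with the highest %-latency share."""
    rows = []
    # Each data row: pct, avg us, min us, max us, call count, FOP name
    pat = re.compile(
        r"(\d+\.\d+)\s+(\d+\.\d+)\s+us\s+(\d+\.\d+)\s+us\s+(\d+\.\d+)\s+us\s+(\d+)\s+([A-Z]+)"
    )
    for m in pat.finditer(profile_text):
        pct, avg, _lo, _hi, calls, fop = m.groups()
        rows.append((float(pct), fop, float(avg), int(calls)))
    rows.sort(reverse=True)  # highest %-latency first
    return rows[:n]

# A few rows copied from the data-hdd interval stats above:
sample = """
5.83 265.79 us 59.00 us 53424.00 us 63296 FXATTROP
7.95 1373.89 us 51.00 us 64822.00 us 16717 FSYNC
23.17 851.99 us 14.00 us 16210553.00 us 78555 FINODELK
24.04 87.44 us 27.00 us 6917.00 us 794081 WRITE
34.36 111.91 us 14.00 us 984871.00 us 886790 INODELK
"""
print(top_fops(sample))
```

On this sample the lock FOPs (INODELK/FINODELK) dominate, which matches the eager-lock discussion later in the thread.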
On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir <jim@palousetech.com> wrote:
Thank you for your response.
I have 4 gluster volumes. 3 are replica 2 + arbiter; the replica bricks are on ovirt1 and ovirt2, with the arbiter on ovirt3. The 4th volume is replica 3, with a brick on all three ovirt machines.
The first 3 volumes are on an SSD; the 4th is on a Seagate SSHD (same in all three machines). On ovirt3, the SSHD has reported hard I/O failures, and that brick is offline. However, the other two replicas are fully operational (although they still show entries in the heal info output that won't go away; that may be expected until I replace the failed disk).
What is bothering me is that ALL 4 gluster volumes are showing horrible performance issues. At this point, as the bad disk has been completely offlined, I would expect gluster to perform at normal speed, but that is definitely not the case.
I've also noticed that the performance hits seem to come in waves: things work acceptably (but slowly) for a while, then suddenly it's as if all disk I/O on all volumes (including non-gluster local OS disk volumes for the hosts) pauses for about 30 seconds, then I/O resumes. During those times, I start getting "VM not responding" and "host not responding" notices, and the applications have major issues.
I've shut down most of my VMs and am down to just my essential core VMs (I shed about 75% of them). I'm still experiencing the same issues.
Am I correct in believing that once the failed disk was taken offline, performance should have returned to normal?
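For tracking whether the heal queue is actually shrinking over time, the `gluster volume heal <VOL> info` output can be counted programmatically. A minimal sketch; `heal_counts` is a hypothetical helper of mine, assuming only the output layout shown in this thread:

```python
def heal_counts(heal_info_text):
    """Count pending-heal entries per brick from the output of
    `gluster volume heal <VOL> info`. Entry lines start with '/';
    'Status:' and 'Number of entries:' lines are skipped."""
    counts, brick = {}, None
    for line in heal_info_text.splitlines():
        line = line.strip()
        if line.startswith("Brick "):
            brick = line[len("Brick "):]
            counts[brick] = 0
        elif brick and line.startswith("/"):
            counts[brick] += 1
    return counts

# Abbreviated sample in the format shown earlier in the thread:
sample = """Brick 172.172.1.11:/gluster/brick3/data-hdd
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
Status: Connected
Number of entries: 2

Brick 172.172.1.12:/gluster/brick3/data-hdd
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
Status: Connected
Number of entries: 1
"""
print(heal_counts(sample))
```

Run against successive heal-info snapshots, unchanging counts (as described above) confirm the heal is stuck rather than just slow.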
On Tue, May 29, 2018 at 1:27 PM, Alex K <rightkicktech@gmail.com> wrote:
I would check disk status and the accessibility of the mount points where your gluster volumes reside.
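A quick scripted version of that check, useful when a brick filesystem has gone read-only or unmounted underneath gluster. This is a hedged sketch; `check_mounts` is a hypothetical helper, and the brick paths would need adjusting to your layout:

```python
import os

def check_mounts(paths):
    """For each path, report whether it exists, is an actual mount
    point (not just a directory on the root filesystem), and is
    writable by the current user."""
    report = {}
    for p in paths:
        report[p] = {
            "exists": os.path.isdir(p),
            "is_mount": os.path.ismount(p),
            "writable": os.access(p, os.W_OK),
        }
    return report

# On the hosts in this thread, the brick mounts would be paths like
# "/gluster/brick1" and "/gluster/brick3"; "/" is used here as a
# universally present example.
print(check_mounts(["/"]))
```

A brick directory that exists but reports `is_mount: False` is a classic sign the backing filesystem failed to mount and gluster is writing into the empty mount-point directory.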
On Tue, May 29, 2018, 22:28 Jim Kusznir <jim@palousetech.com> wrote:
On one ovirt server, I'm now seeing these messages:

[56474.239725] blk_update_request: 63 callbacks suppressed
[56474.239732] blk_update_request: I/O error, dev dm-2, sector 0
[56474.240602] blk_update_request: I/O error, dev dm-2, sector 3905945472
[56474.241346] blk_update_request: I/O error, dev dm-2, sector 3905945584
[56474.242236] blk_update_request: I/O error, dev dm-2, sector 2048
[56474.243072] blk_update_request: I/O error, dev dm-2, sector 3905943424
[56474.243997] blk_update_request: I/O error, dev dm-2, sector 3905943536
[56474.247347] blk_update_request: I/O error, dev dm-2, sector 0
[56474.248315] blk_update_request: I/O error, dev dm-2, sector 3905945472
[56474.249231] blk_update_request: I/O error, dev dm-2, sector 3905945584
[56474.250221] blk_update_request: I/O error, dev dm-2, sector 2048
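Before replacing hardware, it helps to confirm which device-mapper volume `dm-2` actually is. A minimal sketch; `dm_name` is a hypothetical helper, assuming the standard `/sys/block/dm-N/dm/name` sysfs attribute that device-mapper exposes:

```python
import os

def dm_name(minor, sysfs="/sys"):
    """Resolve /dev/dm-<minor> to its device-mapper name (e.g. the
    LVM LV backing a gluster brick) by reading
    <sysfs>/block/dm-<minor>/dm/name. Returns None if absent."""
    path = os.path.join(sysfs, "block", f"dm-{minor}", "dm", "name")
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return None

# On the affected host, this would name the volume behind dm-2:
print(dm_name(2))
```

`lsblk` or `dmsetup ls` gives the same mapping interactively; the point is to tie the kernel's `dm-2` errors to a specific brick before pulling a disk.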
On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir <jim@palousetech.com> wrote:
> I see in messages on ovirt3 (my 3rd machine, the one upgraded to 4.2):
>
> May 29 11:54:41 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
> May 29 11:54:51 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
> May 29 11:55:01 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
>
> (appears a lot)
>
> I also found, in the ssh session on that host, some sysv warnings about the backing disk for one of the gluster volumes (straight replica 3). The glusterfs process for that disk on that machine went offline. It's my understanding that it should continue to work with the other two machines while I attempt to replace that disk, right? Attempted writes (touching an empty file) can take 15 seconds; repeating it later will be much faster.
>
> Gluster generates a bunch of different log files; I don't know which ones you want, or from which machine(s).
>
> How do I do "volume profiling"?
>
> Thanks!
>
> On Tue, May 29, 2018 at 11:53 AM, Sahina Bose <sabose@redhat.com> wrote:
>
>> Do you see errors reported in the mount logs for the volume? If so, could you attach the logs?
>> Any issues with your underlying disks? Can you also attach the output of volume profiling?
>>
>> On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir <jim@palousetech.com> wrote:
>>
>>> Ok, things have gotten MUCH worse this morning. I'm getting random errors from VMs; right now, about a third of my VMs have been paused due to storage issues, and most of the remaining VMs are not performing well.
>>>
>>> At this point, I am in full EMERGENCY mode, as my production services are now impacted, and I'm getting calls coming in with problems...
>>> I'd greatly appreciate help... VMs are running VERY slowly (when they run), and they are steadily getting worse. I don't know why. I was seeing CPU peaks (to 100%) on several VMs, in perfect sync, for a few minutes at a time (while the VM became unresponsive, and any Linux VMs I was logged into were giving me the CPU-stuck messages from my original post). Is all this storage related?
>>>
>>> I also have two different gluster volumes for VM storage, and only one had the issues, but now VMs in both are being affected at the same time and in the same way.
>>>
>>> --Jim
>>>
>>> On Mon, May 28, 2018 at 10:50 PM, Sahina Bose <sabose@redhat.com> wrote:
>>>
>>>> [Adding gluster-users to look at the heal issue]
>>>>
>>>> On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir <jim@palousetech.com> wrote:
>>>>
>>>>> Hello:
>>>>>
>>>>> I've been having some cluster and gluster performance issues lately. I also found that my cluster was out of date, and was trying to apply updates (hoping to fix some of these), and discovered the ovirt 4.1 repos were taken completely offline. So, I was forced to begin an upgrade to 4.2. According to the docs I found/read, I needed only to add the new repo, do a yum update, reboot, and be good on my hosts (I did the yum update on the hosts and engine-setup on my hosted engine). Things seemed to work relatively well, except for a gluster sync issue that showed up.
>>>>>
>>>>> My cluster is a 3 node hyperconverged cluster. I upgraded the hosted engine first, then engine 3. When engine 3 came back up, for some reason one of my gluster volumes would not sync.
>>>>> Here's sample output:
>>>>>
>>>>> [root@ovirt3 ~]# gluster volume heal data-hdd info
>>>>> Brick 172.172.1.11:/gluster/brick3/data-hdd
>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
>>>>> Status: Connected
>>>>> Number of entries: 8
>>>>>
>>>>> Brick 172.172.1.12:/gluster/brick3/data-hdd
>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
>>>>> Status: Connected
>>>>> Number of entries: 8
>>>>>
>>>>> Brick 172.172.1.13:/gluster/brick3/data-hdd
>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
>>>>> Status: Connected
>>>>> Number of entries: 8
>>>>>
>>>>> ---------
>>>>> It's been in this state for a couple of days now, and bandwidth monitoring shows no appreciable data moving. I've tried repeatedly commanding a full heal from all three nodes in the cluster. It's always the same files that need healing.
>>>>> When running gluster volume heal data-hdd statistics, I sometimes see different information, but always some number of "heal failed" entries. It shows 0 for split brain.
>>>>>
>>>>> I'm not quite sure what to do. I suspect it may be due to nodes 1 and 2 still being on the older ovirt/gluster release, but I'm afraid to upgrade and reboot them until I have a good gluster sync (don't need to create a split brain issue). How do I proceed with this?
>>>>>
>>>>> Second issue: I've been experiencing VERY POOR performance on most of my VMs, to the tune that logging into a Windows 10 VM via remote desktop can take 5 minutes, and launching QuickBooks inside said VM can easily take 10 minutes. On some Linux VMs, I get random messages like this:
>>>>> Message from syslogd@unifi at May 28 20:39:23 ...
>>>>> kernel:[6171996.308904] NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [mongod:14766]
>>>>>
>>>>> (the process and PID are often different)
>>>>>
>>>>> I'm not quite sure what to do about this either. My initial thought was to upgrade everything to current and see if it's still there, but I cannot move forward with that until my gluster is healed...
>>>>>
>>>>> Thanks!
>>>>> --Jim
>>>>>
>>>>> _______________________________________________
>>>>> Users mailing list -- users@ovirt.org
>>>>> To unsubscribe send an email to users-leave@ovirt.org
>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>>>>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3LEV6ZQ3JV2XLAL7NYBTXOYMYUOTIRQF/

_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/ACO7RFSLBSRBAIONIC2HQ6Z24ZDES5MF/
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3DEQQLJM3WHQNZJ7KEMRZVFZ52MTIL74/

The profile seems to suggest very high latencies on the brick at ovirt1.nwfiber.com:/gluster/brick1/engine; ovirt2.* shows decent numbers. Is everything OK with the brick on ovirt1? Are the bricks of the engine volume on both these servers identical in terms of their config?

-Krutika

On Wed, May 30, 2018 at 3:07 PM, Jim Kusznir <jim@palousetech.com> wrote:
Hi:
Thank you. I was finally able to get my cluster minimally functional again (it's 2am local here; I had to be up by 6am). I set cluster.eager-lock off, but I'm still seeing performance issues. Here's the engine volume:
Brick: ovirt1.nwfiber.com:/gluster/brick1/engine
------------------------------------------------
Cumulative Stats:
   Block Size:   256b+   512b+   4096b+   8192b+   16384b+   32768b+   65536b+   131072b+
 No. of Reads:     702      24      772        0        51        49        49       5232
No. of Writes:       0     939     4802     4211       858       291       333        588

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls   Fop
 ---------   -----------   -----------   -----------   ------------   ---
      0.00       0.00 us       0.00 us        0.00 us             3   FORGET
      0.00       0.00 us       0.00 us        0.00 us          1227   RELEASE
      0.00       0.00 us       0.00 us        0.00 us          1035   RELEASEDIR
      0.00     827.00 us     619.00 us     1199.00 us            10   READDIRP
      0.00      98.38 us      30.00 us      535.00 us           144   ENTRYLK
      0.00     180.11 us     121.00 us      257.00 us            94   REMOVEXATTR
      0.00     208.03 us      23.00 us     1980.00 us           182   GETXATTR
      0.00    1212.71 us      35.00 us    29459.00 us            38   READDIR
      0.01     260.39 us     172.00 us      552.00 us           376   XATTROP
      0.01     305.10 us      45.00 us   144983.00 us           727   STATFS
      0.03    1267.23 us      22.00 us   418399.00 us           451   FLUSH
      0.04    1164.14 us      86.00 us   437638.00 us           600   OPEN
      0.07    2253.75 us       3.00 us   645037.00 us           598   OPENDIR
      0.08   17360.97 us     149.00 us   962156.00 us            94   SETATTR
      0.12    2733.36 us      50.00 us  1683945.00 us           877   FSTAT
      0.21    3041.55 us      29.00 us  1021732.00 us          1368   INODELK
      0.24     389.93 us     107.00 us   550203.00 us         11840   FXATTROP
      0.34   14820.49 us      38.00 us  1935527.00 us           444   STAT
      0.88    3765.77 us      42.00 us  1341978.00 us          4581   LOOKUP
     15.06   19624.04 us      26.00 us  6412511.00 us         14971   FINODELK
     21.63   59817.26 us      45.00 us  4008832.00 us          7054   FSYNC
     23.19  263941.14 us      86.00 us  4130892.00 us          1714   READ
     38.10  125578.64 us     207.00 us  7682338.00 us          5919   WRITE

    Duration: 6957 seconds
   Data Read: 694832857 bytes
Data Written: 192751104 bytes
Interval 0 Stats: Block Size: 256b+ 512b+ 4096b+ No. of Reads: 702 24 772 No. of Writes: 0 939 4802
Block Size: 8192b+ 16384b+ 32768b+ No. of Reads: 0 51 49 No. of Writes: 4211 858 291
Block Size: 65536b+ 131072b+ No. of Reads: 49 5232 No. of Writes: 333 588 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 3 FORGET 0.00 0.00 us 0.00 us 0.00 us 1227 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1035 RELEASEDIR 0.00 827.00 us 619.00 us 1199.00 us 10 READDIRP 0.00 98.38 us 30.00 us 535.00 us 144 ENTRYLK 0.00 180.11 us 121.00 us 257.00 us 94 REMOVEXATTR 0.00 208.03 us 23.00 us 1980.00 us 182 GETXATTR 0.00 1212.71 us 35.00 us 29459.00 us 38 READDIR 0.01 260.39 us 172.00 us 552.00 us 376 XATTROP 0.01 305.10 us 45.00 us 144983.00 us 727 STATFS 0.03 1267.23 us 22.00 us 418399.00 us 451 FLUSH 0.04 1164.14 us 86.00 us 437638.00 us 600 OPEN 0.07 2253.75 us 3.00 us 645037.00 us 598 OPENDIR 0.08 17360.97 us 149.00 us 962156.00 us 94 SETATTR 0.12 2733.36 us 50.00 us 1683945.00 us 877 FSTAT 0.21 3041.55 us 29.00 us 1021732.00 us 1368 INODELK 0.24 389.93 us 107.00 us 550203.00 us 11840 FXATTROP 0.34 14820.49 us 38.00 us 1935527.00 us 444 STAT 0.91 3859.75 us 42.00 us 1341978.00 us 4581 LOOKUP 15.05 19622.80 us 26.00 us 6412511.00 us 14971 FINODELK 21.62 59809.04 us 45.00 us 4008832.00 us 7054 FSYNC 23.18 263941.14 us 86.00 us 4130892.00 us 1714 READ 38.09 125562.74 us 207.00 us 7682338.00 us 5919 WRITE
Duration: 6957 seconds Data Read: 694832857 bytes Data Written: 192751104 bytes
Brick: ovirt2.nwfiber.com:/gluster/brick1/engine
------------------------------------------------
Cumulative Stats:
   Block Size:   256b+   512b+   1024b+   2048b+   4096b+   8192b+   16384b+   32768b+   65536b+   131072b+
 No. of Reads:     704     167      216      424    34918     9866     12695      3290      3880      13718
No. of Writes:       0     939        0        0     4802     4211       858       291       333        596

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls   Fop
 ---------   -----------   -----------   -----------   ------------   ---
      0.00       0.00 us       0.00 us        0.00 us             3   FORGET
      0.00       0.00 us       0.00 us        0.00 us          1228   RELEASE
      0.00       0.00 us       0.00 us        0.00 us          1041   RELEASEDIR
      0.06      74.26 us      21.00 us      915.00 us           144   ENTRYLK
      0.07     141.97 us      87.00 us      268.00 us            94   REMOVEXATTR
      0.08     370.68 us      22.00 us     1311.00 us            38   READDIR
      0.09     184.88 us     108.00 us      316.00 us            94   SETATTR
      0.14      56.41 us      21.00 us      540.00 us           451   FLUSH
      0.14     139.42 us      18.00 us     3251.00 us           184   GETXATTR
      0.31      96.06 us       2.00 us     2293.00 us           598   OPENDIR
      0.35      88.76 us      31.00 us      716.00 us           727   STATFS
      0.43     131.96 us      59.00 us      784.00 us           600   OPEN
      0.44     214.77 us     110.00 us      851.00 us           376   XATTROP
      0.44      81.02 us      21.00 us     3029.00 us           998   STAT
      0.51      68.57 us      17.00 us     3039.00 us          1368   INODELK
      1.26     429.06 us     274.00 us      988.00 us           539   READDIRP
      1.89     109.48 us      38.00 us     2492.00 us          3175   FSTAT
      5.09      62.48 us      19.00 us     4578.00 us         14967   FINODELK
      5.72     229.43 us      32.00 us      979.00 us          4581   LOOKUP
     11.48     356.40 us     144.00 us    16222.00 us          5917   WRITE
     12.40     192.55 us      87.00 us     4372.00 us         11834   FXATTROP
     12.48     325.11 us      33.00 us    10702.00 us          7051   FSYNC
     46.61    2063.11 us     131.00 us   193833.00 us          4151   READ

    Duration: 6958 seconds
   Data Read: 2782908225 bytes
Data Written: 193799680 bytes
Interval 0 Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 704 167 216 No. of Writes: 0 939 0
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 424 34918 9866 No. of Writes: 0 4802 4211
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 12695 3290 3880 No. of Writes: 858 291 333
Block Size: 131072b+ No. of Reads: 13718 No. of Writes: 596 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 3 FORGET 0.00 0.00 us 0.00 us 0.00 us 1228 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1041 RELEASEDIR 0.06 74.26 us 21.00 us 915.00 us 144 ENTRYLK 0.07 141.97 us 87.00 us 268.00 us 94 REMOVEXATTR 0.08 370.68 us 22.00 us 1311.00 us 38 READDIR 0.09 184.88 us 108.00 us 316.00 us 94 SETATTR 0.14 56.41 us 21.00 us 540.00 us 451 FLUSH 0.14 139.42 us 18.00 us 3251.00 us 184 GETXATTR 0.31 96.06 us 2.00 us 2293.00 us 598 OPENDIR 0.35 88.76 us 31.00 us 716.00 us 727 STATFS 0.43 131.96 us 59.00 us 784.00 us 600 OPEN 0.44 214.77 us 110.00 us 851.00 us 376 XATTROP 0.44 81.02 us 21.00 us 3029.00 us 998 STAT 0.51 68.57 us 17.00 us 3039.00 us 1368 INODELK 1.26 429.06 us 274.00 us 988.00 us 539 READDIRP 1.89 109.51 us 38.00 us 2492.00 us 3175 FSTAT 5.09 62.48 us 19.00 us 4578.00 us 14967 FINODELK 5.72 229.43 us 32.00 us 979.00 us 4581 LOOKUP 11.48 356.40 us 144.00 us 16222.00 us 5917 WRITE 12.40 192.55 us 87.00 us 4372.00 us 11834 FXATTROP 12.48 325.11 us 33.00 us 10702.00 us 7051 FSYNC 46.62 2063.91 us 131.00 us 193833.00 us 4151 READ
Duration: 6958 seconds Data Read: 2782908225 bytes Data Written: 193799680 bytes
gluster> volume info engine
Volume Name: engine
Type: Replicate
Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
performance.strict-o-direct: on
nfs.disable: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 6
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: off
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
On Tue, May 29, 2018 at 9:17 PM, Krutika Dhananjay <kdhananj@redhat.com> wrote:
Adding Ravi to look into the heal issue.
As for the fsync hang and subsequent IO errors, it seems a lot like https://bugzilla.redhat.com/show_bug.cgi?id=1497156 and Paolo Bonzini from qemu had pointed out that this would be fixed by the following commit:
commit e72c9a2a67a6400c8ef3d01d4c461dbbbfa0e1f0
Author: Paolo Bonzini <pbonzini@redhat.com>
Date:   Wed Jun 21 16:35:46 2017 +0200
scsi: virtio_scsi: let host do exception handling
virtio_scsi tries to do exception handling after the default 30 seconds timeout expires. However, it's better to let the host control the timeout, otherwise with a heavy I/O load it is likely that an abort will also timeout. This leads to fatal errors like filesystems going offline.
Disable the 'sd' timeout and allow the host to do exception handling, following the precedent of the storvsc driver.
Hannes has a proposal to introduce timeouts in virtio, but this provides an immediate solution for stable kernels too.
[mkp: fixed typo]
Reported-by: Douglas Miller <dougmill@linux.vnet.ibm.com>
Cc: "James E.J. Bottomley" <jejb@linux.vnet.ibm.com>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Hannes Reinecke <hare@suse.de>
Cc: linux-scsi@vger.kernel.org
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Adding Paolo/Kevin to comment.
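As context for the commit above: the guest-side timeout it disables is exposed in sysfs, so you can check what a given guest is currently using. A minimal sketch; `sd_timeout` is a hypothetical helper, assuming the standard `/sys/block/<dev>/device/timeout` attribute for SCSI disks:

```python
import os

def sd_timeout(device, sysfs="/sys"):
    """Read the SCSI command timeout (in seconds) for a disk from
    <sysfs>/block/<device>/device/timeout. This is the guest-side
    30 s default that the commit above stops relying on for
    virtio-scsi disks."""
    path = os.path.join(sysfs, "block", device, "device", "timeout")
    with open(path) as f:
        return int(f.read().strip())

# Example (only meaningful on a system that actually has sda as a
# SCSI/virtio-scsi disk):
if os.path.exists("/sys/block/sda/device/timeout"):
    print("sda timeout:", sd_timeout("sda"))
```

Writing a larger value to the same file (as root) raises the timeout without a kernel change, which may be a stopgap on guests hitting the abort storms described in the bug.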
As for the poor gluster performance, could you disable cluster.eager-lock and see if that makes any difference:
# gluster volume set <VOL> cluster.eager-lock off
Do also capture the volume profile again if you still see performance issues after disabling eager-lock.
-Krutika
On Wed, May 30, 2018 at 6:55 AM, Jim Kusznir <jim@palousetech.com> wrote:
I also finally found the following in my system log on one server:
[10679.524491] INFO: task glusterclogro:14933 blocked for more than 120 seconds. [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10679.527144] glusterclogro D ffff97209832bf40 0 14933 1 0x00000080 [10679.527150] Call Trace: [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.527268] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10679.527271] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [10679.527275] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.527279] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds. [10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10679.529956] glusterposixfsy D ffff972495f84f10 0 14941 1 0x00000080 [10679.529961] Call Trace: [10679.529966] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.530003] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.530008] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.530038] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.530042] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.530046] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10679.530050] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [10679.530054] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.530058] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120 seconds. [10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
[10679.533732] glusteriotwr13 D ffff9720a83f0000 0 15486 1 0x00000080 [10679.533738] Call Trace: [10679.533747] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.533799] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.533806] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.533846] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.533852] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.533858] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10679.533863] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [10679.533868] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.533873] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10919.512757] INFO: task glusterclogro:14933 blocked for more than 120 seconds. [10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10919.516663] glusterclogro D ffff97209832bf40 0 14933 1 0x00000080 [10919.516677] Call Trace: [10919.516690] [<ffffffffb9913f79>] schedule+0x29/0x70 [10919.516696] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [10919.516703] [<ffffffffb951cc04>] ? blk_finish_plug+0x14/0x40 [10919.516768] [<ffffffffc05e9224>] ? _xfs_buf_ioapply+0x334/0x460 [xfs] [10919.516774] [<ffffffffb991432d>] wait_for_completion+0xfd/0x140 [10919.516782] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10919.516821] [<ffffffffc05eb0a3>] ? _xfs_buf_read+0x23/0x40 [xfs] [10919.516859] [<ffffffffc05eafa9>] xfs_buf_submit_wait+0xf9/0x1d0 [xfs] [10919.516902] [<ffffffffc061b279>] ? 
xfs_trans_read_buf_map+0x199/0x400 [xfs] [10919.516940] [<ffffffffc05eb0a3>] _xfs_buf_read+0x23/0x40 [xfs] [10919.516977] [<ffffffffc05eb1b9>] xfs_buf_read_map+0xf9/0x160 [xfs] [10919.517022] [<ffffffffc061b279>] xfs_trans_read_buf_map+0x199/0x400 [xfs] [10919.517057] [<ffffffffc05c8d04>] xfs_da_read_buf+0xd4/0x100 [xfs] [10919.517091] [<ffffffffc05c8d53>] xfs_da3_node_read+0x23/0xd0 [xfs] [10919.517126] [<ffffffffc05c9fee>] xfs_da3_node_lookup_int+0x6e/0x2f0 [xfs] [10919.517160] [<ffffffffc05d5a1d>] xfs_dir2_node_lookup+0x4d/0x170 [xfs] [10919.517194] [<ffffffffc05ccf5d>] xfs_dir_lookup+0x1bd/0x1e0 [xfs] [10919.517233] [<ffffffffc05fd8d9>] xfs_lookup+0x69/0x140 [xfs] [10919.517271] [<ffffffffc05fa018>] xfs_vn_lookup+0x78/0xc0 [xfs] [10919.517278] [<ffffffffb9425cf3>] lookup_real+0x23/0x60 [10919.517283] [<ffffffffb9426702>] __lookup_hash+0x42/0x60 [10919.517288] [<ffffffffb942d519>] SYSC_renameat2+0x3a9/0x5a0 [10919.517296] [<ffffffffb94d3753>] ? selinux_file_free_security+0x2 3/0x30 [10919.517304] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10919.517309] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10919.517313] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10919.517318] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10919.517323] [<ffffffffb942e58e>] SyS_renameat2+0xe/0x10 [10919.517328] [<ffffffffb942e5ce>] SyS_rename+0x1e/0x20 [10919.517333] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10919.517339] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11159.496095] INFO: task glusteriotwr9:15482 blocked for more than 120 seconds. [11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11159.498978] glusteriotwr9 D ffff971fa0fa1fa0 0 15482 1 0x00000080 [11159.498984] Call Trace: [11159.498995] [<ffffffffb9911f00>] ? 
bit_wait+0x50/0x50 [11159.498999] [<ffffffffb9913f79>] schedule+0x29/0x70 [11159.499003] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11159.499056] [<ffffffffc05dd9b7>] ? xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs] [11159.499082] [<ffffffffc05dd43e>] ? xfs_iext_bno_to_irec+0x8e/0xd0 [xfs] [11159.499090] [<ffffffffb92f7a12>] ? ktime_get_ts64+0x52/0xf0 [11159.499093] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50 [11159.499097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20 [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50 [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0 [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90 [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40 [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70 [11159.499125] [<ffffffffb9394e85>] grab_cache_page_write_begin+0x 55/0xc0 [11159.499130] [<ffffffffb9484b76>] iomap_write_begin+0x66/0x100 [11159.499135] [<ffffffffb9484edf>] iomap_write_actor+0xcf/0x1d0 [11159.499140] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 [11159.499144] [<ffffffffb94854e7>] iomap_apply+0xb7/0x150 [11159.499149] [<ffffffffb9485621>] iomap_file_buffered_write+0xa1/0xe0 [11159.499153] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 [11159.499182] [<ffffffffc05f025d>] xfs_file_buffered_aio_write+0x12d/0x2c0 [xfs] [11159.499213] [<ffffffffc05f057d>] xfs_file_aio_write+0x18d/0x1b0 [xfs] [11159.499217] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 [11159.499222] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 [11159.499225] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 [11159.499230] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11159.499234] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11159.499238] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than 120 seconds. 
[11279.490197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.491665] xfsaild/dm-10 D ffff9720a8660fd0 0 1134 2 0x00000000 [11279.491671] Call Trace: [11279.491682] [<ffffffffb92a3a2e>] ? try_to_del_timer_sync+0x5e/0x90 [11279.491688] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.491744] [<ffffffffc060de36>] _xfs_log_force+0x1c6/0x2c0 [xfs] [11279.491750] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.491783] [<ffffffffc0619fec>] ? xfsaild+0x16c/0x6f0 [xfs] [11279.491817] [<ffffffffc060df5c>] xfs_log_force+0x2c/0x70 [xfs] [11279.491849] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs] [11279.491880] [<ffffffffc0619fec>] xfsaild+0x16c/0x6f0 [xfs] [11279.491913] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs] [11279.491919] [<ffffffffb92bb161>] kthread+0xd1/0xe0 [11279.491926] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 [11279.491932] [<ffffffffb9920677>] ret_from_fork_nospec_begin+0x2 1/0x21 [11279.491936] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 [11279.491976] INFO: task glusterclogfsyn:14934 blocked for more than 120 seconds. [11279.493466] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.494952] glusterclogfsyn D ffff97209832af70 0 14934 1 0x00000080 [11279.494957] Call Trace: [11279.494979] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod] [11279.494983] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.494987] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11279.494997] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod] [11279.495001] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11279.495005] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140 [11279.495010] [<ffffffffb92cf1b0>] ? 
wake_up_state+0x20/0x20 [11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 [11279.495049] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs] [11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] [11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.495090] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11279.495094] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [11279.495098] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.495102] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11279.495105] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds. [11279.496606] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.498114] glusterposixfsy D ffff972495f84f10 0 14941 1 0x00000080 [11279.498118] Call Trace: [11279.498134] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod] [11279.498138] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.498142] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11279.498152] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod] [11279.498156] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11279.498160] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140 [11279.498165] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.498169] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 [11279.498202] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs] [11279.498231] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] [11279.498238] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.498242] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11279.498246] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [11279.498250] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.498254] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11279.498257] INFO: task glusteriotwr1:14950 blocked for more than 120 seconds. 
[11279.499789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.501343] glusteriotwr1 D ffff97208b6daf70 0 14950 1 0x00000080 [11279.501348] Call Trace: [11279.501353] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.501390] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [11279.501396] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.501428] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [11279.501432] [<ffffffffb944ef3f>] generic_write_sync+0x4f/0x70 [11279.501461] [<ffffffffc05f0545>] xfs_file_aio_write+0x155/0x1b0 [xfs] [11279.501466] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 [11279.501471] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 [11279.501475] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 [11279.501479] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11279.501483] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.501489] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11279.501493] INFO: task glusteriotwr4:14953 blocked for more than 120 seconds. [11279.503047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.504630] glusteriotwr4 D ffff972499f2bf40 0 14953 1 0x00000080 [11279.504635] Call Trace: [11279.504640] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.504676] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [11279.504681] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.504710] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [11279.504714] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.504718] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11279.504722] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [11279.504725] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.504730] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [12127.466494] perf: interrupt took too long (8263 > 8150), lowering kernel.perf_event_max_sample_rate to 24000
--------------------
I think this is the cause of the massive ovirt performance issues, irrespective of gluster volume. At the time this happened, I was also ssh'ed into the host, running some rpm query commands. I had just run rpm -qa | grep glusterfs (to verify what version was actually installed), and that command took almost 2 minutes to return! Normally it takes less than 2 seconds. That is all pure local SSD IO, too....
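To put numbers on stalls like that slow rpm query, one can wrap a command in a simple timer. This is a minimal sketch, not from the thread; the `time_cmd` helper and the 2000 ms threshold are illustrative (the threshold mirrors the "normally less than 2 seconds" baseline above), and `sleep 0.1` stands in for the real command:

```shell
#!/bin/bash
# Hypothetical helper: run a command, measure wall time, and flag it when it
# exceeds a threshold. Assumes GNU date with nanosecond %N support (Linux).
time_cmd() {
    local threshold_ms=$1; shift
    local start end elapsed_ms
    start=$(date +%s%N)
    "$@" > /dev/null 2>&1
    end=$(date +%s%N)
    elapsed_ms=$(( (end - start) / 1000000 ))
    if [ "$elapsed_ms" -gt "$threshold_ms" ]; then
        echo "SLOW: $* took ${elapsed_ms}ms"
    else
        echo "OK: $* took ${elapsed_ms}ms"
    fi
}

# Stand-in demo; on the affected host this would be: time_cmd 2000 rpm -qa
time_cmd 2000 sleep 0.1
```

Running that periodically from cron against both a gluster path and a local path would show whether the stalls hit only gluster-backed IO or all IO on the host.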
I'm no expert, but it's my understanding that any time software causes these kinds of issues, it's a serious bug in that software, even if it's only mis-handled exceptions. Is this correct?
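For triaging the hung-task messages themselves, the dmesg output can be summarized to see which processes are tripping the 120-second watchdog most often. A small sketch; the heredoc holds lines copied from the dmesg dump earlier in this thread, and on a live host you would pipe `dmesg` in instead:

```shell
#!/bin/bash
# Count blocked-task reports per task name. The [^:]* stops at the PID
# separator, so "task glusteriotwr1:14950" yields just "glusteriotwr1".
grep -o 'INFO: task [^:]*' <<'EOF' | awk '{print $3}' | sort | uniq -c | sort -rn
[11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than 120 seconds.
[11279.491976] INFO: task glusterclogfsyn:14934 blocked for more than 120 seconds.
[11279.495105] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds.
[11279.498257] INFO: task glusteriotwr1:14950 blocked for more than 120 seconds.
[11279.501493] INFO: task glusteriotwr4:14953 blocked for more than 120 seconds.
EOF
```

In this thread's dumps the blocked tasks are consistently xfsaild plus gluster's fsync/io-worker threads, which points at fsync-heavy load on the XFS bricks rather than at any one process.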
--Jim
On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir <jim@palousetech.com> wrote:
I think this is the profile information for one of the volumes that lives on the SSDs and is fully operational with no down/problem disks:
[root@ovirt2 yum.repos.d]# gluster volume profile data info
Brick: ovirt2.nwfiber.com:/gluster/brick2/data
----------------------------------------------
Cumulative Stats:
   Block Size:       256b+       512b+      1024b+
 No. of Reads:         983        2696        1059
No. of Writes:           0        1113         302

   Block Size:      2048b+      4096b+      8192b+
 No. of Reads:         852       88608       53526
No. of Writes:         522      812340       76257

   Block Size:     16384b+     32768b+     65536b+
 No. of Reads:       54351      241901       15024
No. of Writes:       21636        8656        8976

   Block Size:    131072b+
 No. of Reads:      524156
No. of Writes:      296071

 %-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls         Fop
 ---------  -----------  -----------  -----------  ------------        ----
      0.00      0.00 us      0.00 us      0.00 us          4189     RELEASE
      0.00      0.00 us      0.00 us      0.00 us          1257  RELEASEDIR
      0.00     46.19 us     12.00 us    187.00 us            69       FLUSH
      0.00    147.00 us     78.00 us    367.00 us            86 REMOVEXATTR
      0.00    223.46 us     24.00 us   1166.00 us           149     READDIR
      0.00    565.34 us     76.00 us   3639.00 us            88   FTRUNCATE
      0.00    263.28 us     20.00 us  28385.00 us           228          LK
      0.00     98.84 us      2.00 us    880.00 us          1198     OPENDIR
      0.00     91.59 us     26.00 us  10371.00 us          3853      STATFS
      0.00    494.14 us     17.00 us 193439.00 us          1171    GETXATTR
      0.00    299.42 us     35.00 us   9799.00 us          2044    READDIRP
      0.00   1965.31 us    110.00 us 382258.00 us           321     XATTROP
      0.01    113.40 us     24.00 us  61061.00 us          8134        STAT
      0.01    755.38 us     57.00 us 607603.00 us          3196     DISCARD
      0.05   2690.09 us     58.00 us 2704761.00 us         3206        OPEN
      0.10 119978.25 us     97.00 us 9406684.00 us          154     SETATTR
      0.18    101.73 us     28.00 us 700477.00 us        313379       FSTAT
      0.23   1059.84 us     25.00 us 2716124.00 us        38255      LOOKUP
      0.47   1024.11 us     54.00 us 6197164.00 us        81455    FXATTROP
      1.72   2984.00 us     15.00 us 37098954.00 us      103020    FINODELK
      5.92  44315.32 us     51.00 us 24731536.00 us       23957       FSYNC
     13.27   2399.78 us     25.00 us 22089540.00 us      991005        READ
     37.00   5980.43 us     52.00 us 22099889.00 us     1108976       WRITE
     41.04   5452.75 us     13.00 us 22102452.00 us     1349053     INODELK
Duration: 10026 seconds Data Read: 80046027759 bytes Data Written: 44496632320 bytes
Interval 1 Stats:
   Block Size:       256b+       512b+      1024b+
 No. of Reads:         983        2696        1059
No. of Writes:           0         838         185

   Block Size:      2048b+      4096b+      8192b+
 No. of Reads:         852       85856       51575
No. of Writes:         382      705802       57812

   Block Size:     16384b+     32768b+     65536b+
 No. of Reads:       52673      232093       14984
No. of Writes:       13499        4908        4242

   Block Size:    131072b+
 No. of Reads:      460040
No. of Writes:        6411

 %-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls         Fop
 ---------  -----------  -----------  -----------  ------------        ----
      0.00      0.00 us      0.00 us      0.00 us          2093     RELEASE
      0.00      0.00 us      0.00 us      0.00 us          1093  RELEASEDIR
      0.00     53.38 us     26.00 us    111.00 us            16       FLUSH
      0.00    145.14 us     78.00 us    367.00 us            71 REMOVEXATTR
      0.00    190.96 us    114.00 us    298.00 us            71     SETATTR
      0.00    213.38 us     24.00 us   1145.00 us            90     READDIR
      0.00    263.28 us     20.00 us  28385.00 us           228          LK
      0.00    101.76 us      2.00 us    880.00 us          1093     OPENDIR
      0.01     93.60 us     27.00 us  10371.00 us          3090      STATFS
      0.02    537.47 us     17.00 us 193439.00 us          1038    GETXATTR
      0.03    297.44 us     35.00 us   9799.00 us          1990    READDIRP
      0.03   2357.28 us    110.00 us 382258.00 us           253     XATTROP
      0.04    385.93 us     58.00 us  47593.00 us          2091        OPEN
      0.04    114.86 us     24.00 us  61061.00 us          7715        STAT
      0.06    444.59 us     57.00 us 333240.00 us          3053     DISCARD
      0.42    316.24 us     25.00 us 290728.00 us         29823      LOOKUP
      0.73    257.92 us     54.00 us 344812.00 us         63296    FXATTROP
      1.37     98.30 us     28.00 us  67621.00 us        313172       FSTAT
      1.58   2124.69 us     51.00 us 849200.00 us         16717       FSYNC
      5.73    162.46 us     52.00 us 748492.00 us        794079       WRITE
      7.19   2065.17 us     16.00 us 37098954.00 us       78381    FINODELK
     36.44    886.32 us     25.00 us 2216436.00 us       925421        READ
     46.30   1178.04 us     13.00 us 1700704.00 us       884635     INODELK
Duration: 7485 seconds Data Read: 71250527215 bytes Data Written: 5119903744 bytes
Brick: ovirt3.nwfiber.com:/gluster/brick2/data
----------------------------------------------
Cumulative Stats:
   Block Size:          1b+
 No. of Reads:            0
No. of Writes:      3264419

 %-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls         Fop
 ---------  -----------  -----------  -----------  ------------        ----
      0.00      0.00 us      0.00 us      0.00 us            90      FORGET
      0.00      0.00 us      0.00 us      0.00 us          9462     RELEASE
      0.00      0.00 us      0.00 us      0.00 us          4254  RELEASEDIR
      0.00     50.52 us     13.00 us    190.00 us            71       FLUSH
      0.00    186.97 us     87.00 us    713.00 us            86 REMOVEXATTR
      0.00     79.32 us     33.00 us    189.00 us           228          LK
      0.00    220.98 us    129.00 us    513.00 us            86     SETATTR
      0.01    259.30 us     26.00 us   2632.00 us           137     READDIR
      0.02    322.76 us    145.00 us   2125.00 us           321     XATTROP
      0.03    109.55 us      2.00 us   1258.00 us          1193     OPENDIR
      0.05     70.21 us     21.00 us    431.00 us          3196     DISCARD
      0.05    169.26 us     21.00 us   2315.00 us          1545    GETXATTR
      0.12    176.85 us     63.00 us   2844.00 us          3206        OPEN
      0.61    303.49 us     90.00 us   3085.00 us          9633       FSTAT
      2.44    305.66 us     28.00 us   3716.00 us         38230      LOOKUP
      4.52    266.22 us     55.00 us  53424.00 us         81455    FXATTROP
      6.96   1397.99 us     51.00 us  64822.00 us         23889       FSYNC
     16.48     84.74 us     25.00 us   6917.00 us        932592       WRITE
     30.16    106.90 us     13.00 us 3920189.00 us       1353046     INODELK
     38.55   1794.52 us     14.00 us 16210553.00 us       103039    FINODELK
Duration: 66562 seconds Data Read: 0 bytes Data Written: 3264419 bytes
Interval 1 Stats:
   Block Size:          1b+
 No. of Reads:            0
No. of Writes:       794080

 %-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls         Fop
 ---------  -----------  -----------  -----------  ------------        ----
      0.00      0.00 us      0.00 us      0.00 us          2093     RELEASE
      0.00      0.00 us      0.00 us      0.00 us          1093  RELEASEDIR
      0.00     70.31 us     26.00 us    125.00 us            16       FLUSH
      0.00    193.10 us    103.00 us    713.00 us            71 REMOVEXATTR
      0.01    227.32 us    133.00 us    513.00 us            71     SETATTR
      0.01     79.32 us     33.00 us    189.00 us           228          LK
      0.01    259.83 us     35.00 us   1138.00 us            89     READDIR
      0.03    318.26 us    145.00 us   2047.00 us           253     XATTROP
      0.04    112.67 us      3.00 us   1258.00 us          1093     OPENDIR
      0.06    167.98 us     23.00 us   1951.00 us          1014    GETXATTR
      0.08     70.97 us     22.00 us    431.00 us          3053     DISCARD
      0.13    183.78 us     66.00 us   2844.00 us          2091        OPEN
      1.01    303.82 us     90.00 us   3085.00 us          9610       FSTAT
      3.27    316.59 us     30.00 us   3716.00 us         29820      LOOKUP
      5.83    265.79 us     59.00 us  53424.00 us         63296    FXATTROP
      7.95   1373.89 us     51.00 us  64822.00 us         16717       FSYNC
     23.17    851.99 us     14.00 us 16210553.00 us        78555    FINODELK
     24.04     87.44 us     27.00 us   6917.00 us        794081       WRITE
     34.36    111.91 us     14.00 us 984871.00 us        886790     INODELK
Duration: 7485 seconds Data Read: 0 bytes Data Written: 794080 bytes
-----------------------
Here is the data from the volume that is backed by the SSHDs and has one failed disk:

[root@ovirt2 yum.repos.d]# gluster volume profile data-hdd info
Brick: 172.172.1.12:/gluster/brick3/data-hdd
--------------------------------------------
Cumulative Stats:
   Block Size:       256b+       512b+      1024b+
 No. of Reads:        1702          86          16
No. of Writes:           0         767          71

   Block Size:      2048b+      4096b+      8192b+
 No. of Reads:          19       51841        2049
No. of Writes:          76       60668       35727

   Block Size:     16384b+     32768b+     65536b+
 No. of Reads:        1744         639        1088
No. of Writes:        8524        2410        1285

   Block Size:    131072b+
 No. of Reads:      771999
No. of Writes:       29584

 %-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls         Fop
 ---------  -----------  -----------  -----------  ------------        ----
      0.00      0.00 us      0.00 us      0.00 us          2902     RELEASE
      0.00      0.00 us      0.00 us      0.00 us          1517  RELEASEDIR
      0.00    197.00 us    197.00 us    197.00 us             1   FTRUNCATE
      0.00     70.24 us     16.00 us    758.00 us            51       FLUSH
      0.00    143.93 us     82.00 us    305.00 us            57 REMOVEXATTR
      0.00    178.63 us    105.00 us    712.00 us            60     SETATTR
      0.00     67.30 us     19.00 us    572.00 us           555          LK
      0.00    322.80 us     23.00 us   4673.00 us           138     READDIR
      0.00    336.56 us    106.00 us  11994.00 us           237     XATTROP
      0.00     84.70 us     28.00 us   1071.00 us          3469      STATFS
      0.01    387.75 us      2.00 us 146017.00 us          1467     OPENDIR
      0.01    148.59 us     21.00 us  64374.00 us          4454        STAT
      0.02    783.02 us     16.00 us  93502.00 us          1902    GETXATTR
      0.03   1516.10 us     17.00 us 210690.00 us          1364     ENTRYLK
      0.03   2555.47 us    300.00 us 674454.00 us          1064    READDIRP
      0.07     85.74 us     19.00 us  68340.00 us         62849       FSTAT
      0.07   1978.12 us     59.00 us 202596.00 us          2729        OPEN
      0.22    708.57 us     15.00 us 394799.00 us         25447      LOOKUP
      5.94   2331.74 us     15.00 us 1099530.00 us       207534    FINODELK
      7.31   8311.75 us     58.00 us 1800216.00 us        71668    FXATTROP
     12.49   7735.19 us     51.00 us 3595513.00 us       131642       WRITE
     17.70    957.08 us     16.00 us 13700466.00 us      1508160     INODELK
     24.55   2546.43 us     26.00 us 5077347.00 us       786060        READ
     31.56  49699.15 us     47.00 us 3746331.00 us        51777       FSYNC
Duration: 10101 seconds Data Read: 101562897361 bytes Data Written: 4834450432 bytes
Interval 0 Stats:
   Block Size:       256b+       512b+      1024b+
 No. of Reads:        1702          86          16
No. of Writes:           0         767          71

   Block Size:      2048b+      4096b+      8192b+
 No. of Reads:          19       51841        2049
No. of Writes:          76       60668       35727

   Block Size:     16384b+     32768b+     65536b+
 No. of Reads:        1744         639        1088
No. of Writes:        8524        2410        1285

   Block Size:    131072b+
 No. of Reads:      771999
No. of Writes:       29584

 %-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls         Fop
 ---------  -----------  -----------  -----------  ------------        ----
      0.00      0.00 us      0.00 us      0.00 us          2902     RELEASE
      0.00      0.00 us      0.00 us      0.00 us          1517  RELEASEDIR
      0.00    197.00 us    197.00 us    197.00 us             1   FTRUNCATE
      0.00     70.24 us     16.00 us    758.00 us            51       FLUSH
      0.00    143.93 us     82.00 us    305.00 us            57 REMOVEXATTR
      0.00    178.63 us    105.00 us    712.00 us            60     SETATTR
      0.00     67.30 us     19.00 us    572.00 us           555          LK
      0.00    322.80 us     23.00 us   4673.00 us           138     READDIR
      0.00    336.56 us    106.00 us  11994.00 us           237     XATTROP
      0.00     84.70 us     28.00 us   1071.00 us          3469      STATFS
      0.01    387.75 us      2.00 us 146017.00 us          1467     OPENDIR
      0.01    148.59 us     21.00 us  64374.00 us          4454        STAT
      0.02    783.02 us     16.00 us  93502.00 us          1902    GETXATTR
      0.03   1516.10 us     17.00 us 210690.00 us          1364     ENTRYLK
      0.03   2555.47 us    300.00 us 674454.00 us          1064    READDIRP
      0.07     85.73 us     19.00 us  68340.00 us         62849       FSTAT
      0.07   1978.12 us     59.00 us 202596.00 us          2729        OPEN
      0.22    708.57 us     15.00 us 394799.00 us         25447      LOOKUP
      5.94   2334.57 us     15.00 us 1099530.00 us       207534    FINODELK
      7.31   8311.49 us     58.00 us 1800216.00 us        71668    FXATTROP
     12.49   7735.32 us     51.00 us 3595513.00 us       131642       WRITE
     17.71    957.08 us     16.00 us 13700466.00 us      1508160     INODELK
     24.56   2546.42 us     26.00 us 5077347.00 us       786060        READ
     31.54  49651.63 us     47.00 us 3746331.00 us        51777       FSYNC
Duration: 10101 seconds Data Read: 101562897361 bytes Data Written: 4834450432 bytes
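Profile output like the above can be summarized to surface the biggest latency contributors. A minimal sketch; the heredoc holds a few rows copied from the data-hdd cumulative stats above, and on a live host you would pipe the output of `gluster volume profile data-hdd info` through a `grep ' us '` filter instead:

```shell
#!/bin/bash
# Sort fop rows by %-latency (the first column) and report the top three.
sort -rn <<'EOF' | head -3 | awk '{printf "%s: %s%% of latency, avg %s us\n", $NF, $1, $2}'
      5.94    2331.74 us      15.00 us 1099530.00 us  207534 FINODELK
     12.49    7735.19 us      51.00 us 3595513.00 us  131642 WRITE
     17.70     957.08 us      16.00 us 13700466.00 us 1508160 INODELK
     24.55    2546.43 us      26.00 us 5077347.00 us  786060 READ
     31.56   49699.15 us      47.00 us 3746331.00 us   51777 FSYNC
EOF
# First line printed: FSYNC: 31.56% of latency, avg 49699.15 us
```

On this sample, FSYNC dominates with an average of roughly 50 ms per call, which is consistent with the fsync-related hung-task traces elsewhere in the thread.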
On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir <jim@palousetech.com> wrote:
Thank you for your response.
I have 4 gluster volumes. 3 are replica 2 + arbiter: the replica bricks are on ovirt1 and ovirt2, with the arbiter on ovirt3. The 4th volume is replica 3, with a brick on all three ovirt machines.
The first 3 volumes are on an SSD disk; the 4th is on a Seagate SSHD (same in all three machines). On ovirt3, the SSHD has reported hard IO failures, and that brick is offline. However, the other two replicas are fully operational (although they still show contents in the heal info command that won't go away, but that may be the case until I replace the failed disk).
What is bothering me is that ALL 4 gluster volumes are showing horrible performance issues. At this point, as the bad disk has been completely offlined, I would expect gluster to perform at normal speed, but that is definitely not the case.
I've also noticed that the performance hits seem to come in waves: things work acceptably (but slowly) for a while, then suddenly it's as if all disk IO on all volumes (including the hosts' non-gluster local OS disk volumes) pauses for about 30 seconds, then IO resumes. During those times, I start getting VM-not-responding and host-not-responding notices, and the applications have major issues.
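One way to catch those ~30-second waves in the act is a periodic synchronous-write probe that logs per-write latency. This is an illustrative sketch, not from the thread: `TARGET` is a placeholder path, the loop is capped at 3 iterations for demonstration, and running one copy against a gluster-backed path and one against a local disk would show whether the pauses hit both at once:

```shell
#!/bin/bash
# Hypothetical write-latency probe. oflag=dsync forces each write to reach
# stable storage, so a stalled block layer shows up as a large elapsed time.
TARGET=${TARGET:-/tmp/io-probe.bin}
for i in 1 2 3; do                      # on a real host: while true; do ... sleep 1
    start=$(date +%s%N)
    dd if=/dev/zero of="$TARGET" bs=64k count=1 oflag=dsync conv=notrunc status=none
    end=$(date +%s%N)
    echo "$(date '+%H:%M:%S') write took $(( (end - start) / 1000000 )) ms"
done
rm -f "$TARGET"
```

Timestamped output from two probes side by side makes it easy to correlate the stalls with the hung-task messages in dmesg.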
I've shut down most of my VMs and am down to just my essential core VMs (I shed about 75% of my VMs). I'm still experiencing the same issues.
Am I correct in believing that once the failed disk was brought offline that performance should return to normal?
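For watching whether the heal backlog is actually shrinking, the `gluster volume heal <vol> info` output can be reduced to a per-brick count. A small sketch; the heredoc is a trimmed sample of the heal output from earlier in this thread, and on a host you would pipe the live command output in instead:

```shell
#!/bin/bash
# Print "brick: N entries" for each brick section of heal-info output.
awk '/^Brick / { brick=$2 }
     /^Number of entries:/ { print brick": "$NF" entries" }' <<'EOF'
Brick 172.172.1.11:/gluster/brick3/data-hdd
Number of entries: 8
Brick 172.172.1.12:/gluster/brick3/data-hdd
Number of entries: 8
Brick 172.172.1.13:/gluster/brick3/data-hdd
Number of entries: 8
EOF
```

Logging those counts every few minutes would show whether the "always the same 8 files" state is truly static or slowly churning.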
On Tue, May 29, 2018 at 1:27 PM, Alex K <rightkicktech@gmail.com> wrote:
I would check disks status and accessibility of mount points where your gluster volumes reside.
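One way to script that mount-point check (an illustrative sketch, not from the thread): a directory that sits on a different device than its parent has something mounted on it. The `check_mount` helper and the example paths are assumptions; point it at the gluster brick and mount paths on a real host, and note it does not special-case "/" itself:

```shell
#!/bin/bash
# Hypothetical helper: report whether a directory is a mount point by
# comparing its device number with its parent's.
check_mount() {
    dir=$1
    if [ "$(stat -c %d "$dir")" = "$(stat -c %d "$dir/..")" ]; then
        echo "$dir: NOT a mount point"
    else
        echo "$dir: mounted"
    fi
}

check_mount /proc    # expected "mounted" on any Linux host
check_mount /tmp     # result depends on whether /tmp is a separate mount
```

Pairing this with a timed `ls` of each brick directory would also catch the case where the mount exists but the underlying disk has stopped answering.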
On Tue, May 29, 2018, 22:28 Jim Kusznir <jim@palousetech.com> wrote:
> On one ovirt server, I'm now seeing these messages: > [56474.239725] blk_update_request: 63 callbacks suppressed > [56474.239732] blk_update_request: I/O error, dev dm-2, sector 0 > [56474.240602] blk_update_request: I/O error, dev dm-2, sector > 3905945472 > [56474.241346] blk_update_request: I/O error, dev dm-2, sector > 3905945584 > [56474.242236] blk_update_request: I/O error, dev dm-2, sector 2048 > [56474.243072] blk_update_request: I/O error, dev dm-2, sector > 3905943424 > [56474.243997] blk_update_request: I/O error, dev dm-2, sector > 3905943536 > [56474.247347] blk_update_request: I/O error, dev dm-2, sector 0 > [56474.248315] blk_update_request: I/O error, dev dm-2, sector > 3905945472 > [56474.249231] blk_update_request: I/O error, dev dm-2, sector > 3905945584 > [56474.250221] blk_update_request: I/O error, dev dm-2, sector 2048 > > > > > On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir <jim@palousetech.com> > wrote: > >> I see in messages on ovirt3 (my 3rd machine, the one upgraded to >> 4.2): >> >> May 29 11:54:41 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >> database connection failed (No such file or directory) >> May 29 11:54:51 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >> database connection failed (No such file or directory) >> May 29 11:55:01 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >> database connection failed (No such file or directory) >> (appears a lot). >> >> I also found on the ssh session of that, some sysv warnings about >> the backing disk for one of the gluster volumes (straight replica 3). The >> glusterfs process for that disk on that machine went offline. Its my >> understanding that it should continue to work with the other two machines >> while I attempt to replace that disk, right? Attempted writes (touching an >> empty file) can take 15 seconds, repeating it later will be much faster. 
>> >> Gluster generates a bunch of different log files, I don't know what >> ones you want, or from which machine(s). >> >> How do I do "volume profiling"? >> >> Thanks! >> >> On Tue, May 29, 2018 at 11:53 AM, Sahina Bose <sabose@redhat.com> >> wrote: >> >>> Do you see errors reported in the mount logs for the volume? If >>> so, could you attach the logs? >>> Any issues with your underlying disks. Can you also attach output >>> of volume profiling? >>> >>> On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir <jim@palousetech.com >>> > wrote: >>> >>>> Ok, things have gotten MUCH worse this morning. I'm getting >>>> random errors from VMs, right now, about a third of my VMs have been paused >>>> due to storage issues, and most of the remaining VMs are not performing >>>> well. >>>> >>>> At this point, I am in full EMERGENCY mode, as my production >>>> services are now impacted, and I'm getting calls coming in with problems... >>>> >>>> I'd greatly appreciate help...VMs are running VERY slowly (when >>>> they run), and they are steadily getting worse. I don't know why. I was >>>> seeing CPU peaks (to 100%) on several VMs, in perfect sync, for a few >>>> minutes at a time (while the VM became unresponsive and any VMs I was >>>> logged into that were linux were giving me the CPU stuck messages in my >>>> origional post). Is all this storage related? >>>> >>>> I also have two different gluster volumes for VM storage, and >>>> only one had the issues, but now VMs in both are being affected at the same >>>> time and same way. >>>> >>>> --Jim >>>> >>>> On Mon, May 28, 2018 at 10:50 PM, Sahina Bose <sabose@redhat.com> >>>> wrote: >>>> >>>>> [Adding gluster-users to look at the heal issue] >>>>> >>>>> On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir < >>>>> jim@palousetech.com> wrote: >>>>> >>>>>> Hello: >>>>>> >>>>>> I've been having some cluster and gluster performance issues >>>>>> lately. 
I also found that my cluster was out of date, and was trying to >>>>>> apply updates (hoping to fix some of these), and discovered the ovirt 4.1 >>>>>> repos were taken completely offline. So, I was forced to begin an upgrade >>>>>> to 4.2. According to docs I found/read, I needed only add the new repo, do >>>>>> a yum update, reboot, and be good on my hosts (did the yum update, the >>>>>> engine-setup on my hosted engine). Things seemed to work relatively well, >>>>>> except for a gluster sync issue that showed up. >>>>>> >>>>>> My cluster is a 3 node hyperconverged cluster. I upgraded the >>>>>> hosted engine first, then engine 3. When engine 3 came back up, for some >>>>>> reason one of my gluster volumes would not sync. Here's sample output: >>>>>> >>>>>> [root@ovirt3 ~]# gluster volume heal data-hdd info >>>>>> Brick 172.172.1.11:/gluster/brick3/data-hdd >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>>>>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>>>>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>>>>> 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>>>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>>>>> 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>>>>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>>>> Status: Connected >>>>>> Number of entries: 8 
>>>>>> >>>>>> Brick 172.172.1.12:/gluster/brick3/data-hdd >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>>>>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>>>>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>>>>> 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>>>>> 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>>>>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>>>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>>>> Status: Connected >>>>>> Number of entries: 8 >>>>>> >>>>>> Brick 172.172.1.13:/gluster/brick3/data-hdd >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>>>>> 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>>>>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>>>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>>>>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>>>>> 
725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>>>>> 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>>>> Status: Connected >>>>>> Number of entries: 8 >>>>>> >>>>>> --------- >>>>>> Its been in this state for a couple days now, and bandwidth >>>>>> monitoring shows no appreciable data moving. I've tried repeatedly >>>>>> commanding a full heal from all three clusters in the node. Its always the >>>>>> same files that need healing. >>>>>> >>>>>> When running gluster volume heal data-hdd statistics, I see >>>>>> sometimes different information, but always some number of "heal failed" >>>>>> entries. It shows 0 for split brain. >>>>>> >>>>>> I'm not quite sure what to do. I suspect it may be due to >>>>>> nodes 1 and 2 still being on the older ovirt/gluster release, but I'm >>>>>> afraid to upgrade and reboot them until I have a good gluster sync (don't >>>>>> need to create a split brain issue). How do I proceed with this? >>>>>> >>>>>> Second issue: I've been experiencing VERY POOR performance on >>>>>> most of my VMs. To the tune that logging into a windows 10 vm via remote >>>>>> desktop can take 5 minutes, launching quickbooks inside said vm can easily >>>>>> take 10 minutes. On some linux VMs, I get random messages like this: >>>>>> Message from syslogd@unifi at May 28 20:39:23 ... >>>>>> kernel:[6171996.308904] NMI watchdog: BUG: soft lockup - CPU#0 >>>>>> stuck for 22s! [mongod:14766] >>>>>> >>>>>> (the process and PID are often different) >>>>>> >>>>>> I'm not quite sure what to do about this either. My initial >>>>>> thought was upgrad everything to current and see if its still there, but I >>>>>> cannot move forward with that until my gluster is healed... >>>>>> >>>>>> Thanks! 
>>>>>> --Jim >>>>>> >>>>>> _______________________________________________ >>>>>> Users mailing list -- users@ovirt.org >>>>>> To unsubscribe send an email to users-leave@ovirt.org >>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ >>>>>> oVirt Code of Conduct: https://www.ovirt.org/communit >>>>>> y/about/community-guidelines/ >>>>>> List Archives: https://lists.ovirt.org/archiv >>>>>> es/list/users@ovirt.org/message/3LEV6ZQ3JV2XLAL7NYBTXOYMYUOT >>>>>> IRQF/ >>>>>> >>>>>> >>>>> >>>> >>> >> > _______________________________________________ > Users mailing list -- users@ovirt.org > To unsubscribe send an email to users-leave@ovirt.org > Privacy Statement: https://www.ovirt.org/site/privacy-policy/ > oVirt Code of Conduct: https://www.ovirt.org/communit > y/about/community-guidelines/ > List Archives: https://lists.ovirt.org/archiv > es/list/users@ovirt.org/message/ACO7RFSLBSRBAIONIC2HQ6Z24ZDES5MF/ >
_______________________________________________ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-leave@ovirt.org Privacy Statement: https://www.ovirt.org/site/privacy-policy/ oVirt Code of Conduct: https://www.ovirt.org/communit y/about/community-guidelines/ List Archives: https://lists.ovirt.org/archiv es/list/users@ovirt.org/message/3DEQQLJM3WHQNZJ7KEMRZVFZ52MTIL74/

Hi all again: I'm now subscribed to gluster-users as well, so I should get any replies from that side too.

At this point, I am seeing acceptable (although slower than I expect) performance much of the time, with periodic massive spikes in latency (occasionally so bad as to cause ovirt to detect a bad engine health status). Often, if I check the logs just then, I'll see those call traces in xfs_log_worker or other gluster processes, as well as hung task timeout messages.

As to the profile suggesting ovirt1 had poorer performance than ovirt2, I don't have an explanation. gluster volume info engine on both hosts is identical. The computers and drives are identical (Dell R610 with PERC 6/i controller configured to pass through the drive). ovirt1 and ovirt2's partition scheme/map do vary somewhat, but I figured that wouldn't be a big deal and gluster just uses the least common denominator. Is that accurate?

In case it was missed, here are the dmesg errors being thrown by gluster (as far as I can tell). They definitely started after I upgraded from gluster 3.8 to 3.12:

[ 6604.536156] perf: interrupt took too long (9593 > 9565), lowering kernel.perf_event_max_sample_rate to 20000 [ 8280.188734] INFO: task xfsaild/dm-10:1061 blocked for more than 120 seconds. [ 8280.188787] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 8280.188837] xfsaild/dm-10 D ffff93203a2eeeb0 0 1061 2 0x00000000 [ 8280.188843] Call Trace: [ 8280.188857] [<ffffffff960a3a2e>] ? try_to_del_timer_sync+0x5e/0x90 [ 8280.188864] [<ffffffff96713f79>] schedule+0x29/0x70 [ 8280.188932] [<ffffffffc049fe36>] _xfs_log_force+0x1c6/0x2c0 [xfs] [ 8280.188939] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20 [ 8280.188972] [<ffffffffc04abfec>] ? xfsaild+0x16c/0x6f0 [xfs] [ 8280.189003] [<ffffffffc049ff5c>] xfs_log_force+0x2c/0x70 [xfs] [ 8280.189035] [<ffffffffc04abe80>] ?
xfs_trans_ail_cursor_first+0x90/0x90 [xfs] [ 8280.189067] [<ffffffffc04abfec>] xfsaild+0x16c/0x6f0 [xfs] [ 8280.189100] [<ffffffffc04abe80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs] [ 8280.189105] [<ffffffff960bb161>] kthread+0xd1/0xe0 [ 8280.189109] [<ffffffff960bb090>] ? insert_kthread_work+0x40/0x40 [ 8280.189117] [<ffffffff96720677>] ret_from_fork_nospec_begin+0x21/0x21 [ 8280.189121] [<ffffffff960bb090>] ? insert_kthread_work+0x40/0x40 [ 8280.189161] INFO: task glusterclogfsyn:6421 blocked for more than 120 seconds. [ 8280.189207] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 8280.189256] glusterclogfsyn D ffff93203a2edee0 0 6421 1 0x00000080 [ 8280.189260] Call Trace: [ 8280.189265] [<ffffffff96713f79>] schedule+0x29/0x70 [ 8280.189300] [<ffffffffc04a0388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [ 8280.189305] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20 [ 8280.189333] [<ffffffffc0480b97>] xfs_file_fsync+0x107/0x1e0 [xfs] [ 8280.189340] [<ffffffff9624f0e7>] do_fsync+0x67/0xb0 [ 8280.189345] [<ffffffff9672076f>] ? system_call_after_swapgs+0xbc/0x160 [ 8280.189348] [<ffffffff9624f3d0>] SyS_fsync+0x10/0x20 [ 8280.189352] [<ffffffff9672082f>] system_call_fastpath+0x1c/0x21 [ 8280.189356] [<ffffffff9672077b>] ? system_call_after_swapgs+0xc8/0x160 [ 8280.189360] INFO: task glusteriotwr2:766 blocked for more than 120 seconds. [ 8280.189404] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 8280.189453] glusteriotwr2 D ffff931cfa510000 0 766 1 0x00000080 [ 8280.189457] Call Trace: [ 8280.189476] [<ffffffffc01a3839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod] [ 8280.189480] [<ffffffff96713f79>] schedule+0x29/0x70 [ 8280.189484] [<ffffffff967118e9>] schedule_timeout+0x239/0x2c0 [ 8280.189494] [<ffffffffc01a3d98>] ? 
dm_make_request+0x128/0x1a0 [dm_mod] [ 8280.189498] [<ffffffff9671348d>] io_schedule_timeout+0xad/0x130 [ 8280.189502] [<ffffffff967145ad>] wait_for_completion_io+0xfd/0x140 [ 8280.189507] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20 [ 8280.189513] [<ffffffff9631e574>] blkdev_issue_flush+0xb4/0x110 [ 8280.189546] [<ffffffffc04984b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs] [ 8280.189588] [<ffffffffc0480c40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] [ 8280.189593] [<ffffffff9624f0e7>] do_fsync+0x67/0xb0 [ 8280.189597] [<ffffffff9672076f>] ? system_call_after_swapgs+0xbc/0x160 [ 8280.189600] [<ffffffff9624f3d0>] SyS_fsync+0x10/0x20 [ 8280.189604] [<ffffffff9672082f>] system_call_fastpath+0x1c/0x21 [ 8280.189608] [<ffffffff9672077b>] ? system_call_after_swapgs+0xc8/0x160 [ 8400.186024] INFO: task xfsaild/dm-10:1061 blocked for more than 120 seconds. [ 8400.186077] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 8400.186127] xfsaild/dm-10 D ffff93203a2eeeb0 0 1061 2 0x00000000 [ 8400.186133] Call Trace: [ 8400.186146] [<ffffffff960a3a2e>] ? try_to_del_timer_sync+0x5e/0x90 [ 8400.186153] [<ffffffff96713f79>] schedule+0x29/0x70 [ 8400.186221] [<ffffffffc049fe36>] _xfs_log_force+0x1c6/0x2c0 [xfs] [ 8400.186227] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20 [ 8400.186260] [<ffffffffc04abfec>] ? xfsaild+0x16c/0x6f0 [xfs] [ 8400.186292] [<ffffffffc049ff5c>] xfs_log_force+0x2c/0x70 [xfs] [ 8400.186324] [<ffffffffc04abe80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs] [ 8400.186356] [<ffffffffc04abfec>] xfsaild+0x16c/0x6f0 [xfs] [ 8400.186388] [<ffffffffc04abe80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs] [ 8400.186394] [<ffffffff960bb161>] kthread+0xd1/0xe0 [ 8400.186398] [<ffffffff960bb090>] ? insert_kthread_work+0x40/0x40 [ 8400.186405] [<ffffffff96720677>] ret_from_fork_nospec_begin+0x21/0x21 [ 8400.186409] [<ffffffff960bb090>] ? insert_kthread_work+0x40/0x40 [ 8400.186450] INFO: task glusterclogfsyn:6421 blocked for more than 120 seconds. 
[ 8400.186496] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 8400.186545] glusterclogfsyn D ffff93203a2edee0 0 6421 1 0x00000080 [ 8400.186549] Call Trace: [ 8400.186554] [<ffffffff96713f79>] schedule+0x29/0x70 [ 8400.186589] [<ffffffffc04a0388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [ 8400.186593] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20 [ 8400.186622] [<ffffffffc0480b97>] xfs_file_fsync+0x107/0x1e0 [xfs] [ 8400.186629] [<ffffffff9624f0e7>] do_fsync+0x67/0xb0 [ 8400.186634] [<ffffffff9672076f>] ? system_call_after_swapgs+0xbc/0x160 [ 8400.186637] [<ffffffff9624f3d0>] SyS_fsync+0x10/0x20 [ 8400.186641] [<ffffffff9672082f>] system_call_fastpath+0x1c/0x21 [ 8400.186645] [<ffffffff9672077b>] ? system_call_after_swapgs+0xc8/0x160 [ 8400.186649] INFO: task glusteriotwr2:766 blocked for more than 120 seconds. [ 8400.186693] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 8400.186741] glusteriotwr2 D ffff931cfa510000 0 766 1 0x00000080 [ 8400.186746] Call Trace: [ 8400.186764] [<ffffffffc01a3839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod] [ 8400.186768] [<ffffffff96713f79>] schedule+0x29/0x70 [ 8400.186772] [<ffffffff967118e9>] schedule_timeout+0x239/0x2c0 [ 8400.186782] [<ffffffffc01a3d98>] ? dm_make_request+0x128/0x1a0 [dm_mod] [ 8400.186786] [<ffffffff9671348d>] io_schedule_timeout+0xad/0x130 [ 8400.186790] [<ffffffff967145ad>] wait_for_completion_io+0xfd/0x140 [ 8400.186795] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20 [ 8400.186800] [<ffffffff9631e574>] blkdev_issue_flush+0xb4/0x110 [ 8400.186833] [<ffffffffc04984b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs] [ 8400.186862] [<ffffffffc0480c40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] [ 8400.186879] [<ffffffff9624f0e7>] do_fsync+0x67/0xb0 [ 8400.186884] [<ffffffff9672076f>] ? 
system_call_after_swapgs+0xbc/0x160 [ 8400.186887] [<ffffffff9624f3d0>] SyS_fsync+0x10/0x20 [ 8400.186891] [<ffffffff9672082f>] system_call_fastpath+0x1c/0x21 [ 8400.186895] [<ffffffff9672077b>] ? system_call_after_swapgs+0xc8/0x160 [ 8400.186912] INFO: task kworker/3:4:3191 blocked for more than 120 seconds. [ 8400.186957] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 8400.187006] kworker/3:4 D ffff931faf598fd0 0 3191 2 0x00000080 [ 8400.187041] Workqueue: xfs-sync/dm-10 xfs_log_worker [xfs] [ 8400.187043] Call Trace: [ 8400.187050] [<ffffffff9614e45f>] ? delayacct_end+0x8f/0xb0 [ 8400.187055] [<ffffffff960dbbc8>] ? update_group_power+0x28/0x280 [ 8400.187059] [<ffffffff96713f79>] schedule+0x29/0x70 [ 8400.187062] [<ffffffff967118e9>] schedule_timeout+0x239/0x2c0 [ 8400.187066] [<ffffffff9671432d>] wait_for_completion+0xfd/0x140 [ 8400.187071] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20 [ 8400.187075] [<ffffffff960b418d>] flush_work+0xfd/0x190 [ 8400.187079] [<ffffffff960b06e0>] ? move_linked_works+0x90/0x90 [ 8400.187110] [<ffffffffc04a1d8a>] xlog_cil_force_lsn+0x8a/0x210 [xfs] [ 8400.187142] [<ffffffffc049fcf5>] _xfs_log_force+0x85/0x2c0 [xfs] [ 8400.187174] [<ffffffffc049ffd6>] ? xfs_log_worker+0x36/0x100 [xfs] [ 8400.187206] [<ffffffffc049ff5c>] xfs_log_force+0x2c/0x70 [xfs] [ 8400.187237] [<ffffffffc049ffd6>] xfs_log_worker+0x36/0x100 [xfs] [ 8400.187241] [<ffffffff960b312f>] process_one_work+0x17f/0x440 [ 8400.187245] [<ffffffff960b3df6>] worker_thread+0x126/0x3c0 [ 8400.187249] [<ffffffff960b3cd0>] ? manage_workers.isra.24+0x2a0/0x2a0 [ 8400.187253] [<ffffffff960bb161>] kthread+0xd1/0xe0 [ 8400.187257] [<ffffffff960bb090>] ? insert_kthread_work+0x40/0x40 [ 8400.187261] [<ffffffff96720677>] ret_from_fork_nospec_begin+0x21/0x21 [ 8400.187265] [<ffffffff960bb090>] ? 
insert_kthread_work+0x40/0x40 [13493.643624] perf: interrupt took too long (12026 > 11991), lowering kernel.perf_event_max_sample_rate to 16000 On Wed, May 30, 2018 at 2:44 AM, Krutika Dhananjay <kdhananj@redhat.com> wrote:
The profile seems to suggest very high latencies on the brick at ovirt1.nwfiber.com:/gluster/brick1/engine, while ovirt2.* shows decent numbers. Is everything OK with the brick on ovirt1? Are the bricks of the engine volume on both these servers identical in terms of their config?
-Krutika
On Wed, May 30, 2018 at 3:07 PM, Jim Kusznir <jim@palousetech.com> wrote:
Hi:
Thank you. I was finally able to get my cluster minimally functional again (it's 2am local here; I had to be up by 6am). I set cluster.eager-lock to off, but I'm still seeing performance issues. Here's the engine volume:
Brick: ovirt1.nwfiber.com:/gluster/brick1/engine ------------------------------------------------ Cumulative Stats: Block Size: 256b+ 512b+ 4096b+ No. of Reads: 702 24 772 No. of Writes: 0 939 4802
Block Size: 8192b+ 16384b+ 32768b+ No. of Reads: 0 51 49 No. of Writes: 4211 858 291
Block Size: 65536b+ 131072b+ No. of Reads: 49 5232 No. of Writes: 333 588 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 3 FORGET 0.00 0.00 us 0.00 us 0.00 us 1227 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1035 RELEASEDIR 0.00 827.00 us 619.00 us 1199.00 us 10 READDIRP 0.00 98.38 us 30.00 us 535.00 us 144 ENTRYLK 0.00 180.11 us 121.00 us 257.00 us 94 REMOVEXATTR 0.00 208.03 us 23.00 us 1980.00 us 182 GETXATTR 0.00 1212.71 us 35.00 us 29459.00 us 38 READDIR 0.01 260.39 us 172.00 us 552.00 us 376 XATTROP 0.01 305.10 us 45.00 us 144983.00 us 727 STATFS 0.03 1267.23 us 22.00 us 418399.00 us 451 FLUSH 0.04 1164.14 us 86.00 us 437638.00 us 600 OPEN 0.07 2253.75 us 3.00 us 645037.00 us 598 OPENDIR 0.08 17360.97 us 149.00 us 962156.00 us 94 SETATTR 0.12 2733.36 us 50.00 us 1683945.00 us 877 FSTAT 0.21 3041.55 us 29.00 us 1021732.00 us 1368 INODELK 0.24 389.93 us 107.00 us 550203.00 us 11840 FXATTROP 0.34 14820.49 us 38.00 us 1935527.00 us 444 STAT 0.88 3765.77 us 42.00 us 1341978.00 us 4581 LOOKUP 15.06 19624.04 us 26.00 us 6412511.00 us 14971 FINODELK 21.63 59817.26 us 45.00 us 4008832.00 us 7054 FSYNC 23.19 263941.14 us 86.00 us 4130892.00 us 1714 READ 38.10 125578.64 us 207.00 us 7682338.00 us 5919 WRITE
Duration: 6957 seconds Data Read: 694832857 bytes Data Written: 192751104 bytes
Interval 0 Stats: Block Size: 256b+ 512b+ 4096b+ No. of Reads: 702 24 772 No. of Writes: 0 939 4802
Block Size: 8192b+ 16384b+ 32768b+ No. of Reads: 0 51 49 No. of Writes: 4211 858 291
Block Size: 65536b+ 131072b+ No. of Reads: 49 5232 No. of Writes: 333 588 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 3 FORGET 0.00 0.00 us 0.00 us 0.00 us 1227 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1035 RELEASEDIR 0.00 827.00 us 619.00 us 1199.00 us 10 READDIRP 0.00 98.38 us 30.00 us 535.00 us 144 ENTRYLK 0.00 180.11 us 121.00 us 257.00 us 94 REMOVEXATTR 0.00 208.03 us 23.00 us 1980.00 us 182 GETXATTR 0.00 1212.71 us 35.00 us 29459.00 us 38 READDIR 0.01 260.39 us 172.00 us 552.00 us 376 XATTROP 0.01 305.10 us 45.00 us 144983.00 us 727 STATFS 0.03 1267.23 us 22.00 us 418399.00 us 451 FLUSH 0.04 1164.14 us 86.00 us 437638.00 us 600 OPEN 0.07 2253.75 us 3.00 us 645037.00 us 598 OPENDIR 0.08 17360.97 us 149.00 us 962156.00 us 94 SETATTR 0.12 2733.36 us 50.00 us 1683945.00 us 877 FSTAT 0.21 3041.55 us 29.00 us 1021732.00 us 1368 INODELK 0.24 389.93 us 107.00 us 550203.00 us 11840 FXATTROP 0.34 14820.49 us 38.00 us 1935527.00 us 444 STAT 0.91 3859.75 us 42.00 us 1341978.00 us 4581 LOOKUP 15.05 19622.80 us 26.00 us 6412511.00 us 14971 FINODELK 21.62 59809.04 us 45.00 us 4008832.00 us 7054 FSYNC 23.18 263941.14 us 86.00 us 4130892.00 us 1714 READ 38.09 125562.74 us 207.00 us 7682338.00 us 5919 WRITE
Duration: 6957 seconds Data Read: 694832857 bytes Data Written: 192751104 bytes
Brick: ovirt2.nwfiber.com:/gluster/brick1/engine ------------------------------------------------ Cumulative Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 704 167 216 No. of Writes: 0 939 0
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 424 34918 9866 No. of Writes: 0 4802 4211
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 12695 3290 3880 No. of Writes: 858 291 333
Block Size: 131072b+ No. of Reads: 13718 No. of Writes: 596 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 3 FORGET 0.00 0.00 us 0.00 us 0.00 us 1228 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1041 RELEASEDIR 0.06 74.26 us 21.00 us 915.00 us 144 ENTRYLK 0.07 141.97 us 87.00 us 268.00 us 94 REMOVEXATTR 0.08 370.68 us 22.00 us 1311.00 us 38 READDIR 0.09 184.88 us 108.00 us 316.00 us 94 SETATTR 0.14 56.41 us 21.00 us 540.00 us 451 FLUSH 0.14 139.42 us 18.00 us 3251.00 us 184 GETXATTR 0.31 96.06 us 2.00 us 2293.00 us 598 OPENDIR 0.35 88.76 us 31.00 us 716.00 us 727 STATFS 0.43 131.96 us 59.00 us 784.00 us 600 OPEN 0.44 214.77 us 110.00 us 851.00 us 376 XATTROP 0.44 81.02 us 21.00 us 3029.00 us 998 STAT 0.51 68.57 us 17.00 us 3039.00 us 1368 INODELK 1.26 429.06 us 274.00 us 988.00 us 539 READDIRP 1.89 109.48 us 38.00 us 2492.00 us 3175 FSTAT 5.09 62.48 us 19.00 us 4578.00 us 14967 FINODELK 5.72 229.43 us 32.00 us 979.00 us 4581 LOOKUP 11.48 356.40 us 144.00 us 16222.00 us 5917 WRITE 12.40 192.55 us 87.00 us 4372.00 us 11834 FXATTROP 12.48 325.11 us 33.00 us 10702.00 us 7051 FSYNC 46.61 2063.11 us 131.00 us 193833.00 us 4151 READ
Duration: 6958 seconds Data Read: 2782908225 bytes Data Written: 193799680 bytes
Interval 0 Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 704 167 216 No. of Writes: 0 939 0
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 424 34918 9866 No. of Writes: 0 4802 4211
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 12695 3290 3880 No. of Writes: 858 291 333
Block Size: 131072b+ No. of Reads: 13718 No. of Writes: 596 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 3 FORGET 0.00 0.00 us 0.00 us 0.00 us 1228 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1041 RELEASEDIR 0.06 74.26 us 21.00 us 915.00 us 144 ENTRYLK 0.07 141.97 us 87.00 us 268.00 us 94 REMOVEXATTR 0.08 370.68 us 22.00 us 1311.00 us 38 READDIR 0.09 184.88 us 108.00 us 316.00 us 94 SETATTR 0.14 56.41 us 21.00 us 540.00 us 451 FLUSH 0.14 139.42 us 18.00 us 3251.00 us 184 GETXATTR 0.31 96.06 us 2.00 us 2293.00 us 598 OPENDIR 0.35 88.76 us 31.00 us 716.00 us 727 STATFS 0.43 131.96 us 59.00 us 784.00 us 600 OPEN 0.44 214.77 us 110.00 us 851.00 us 376 XATTROP 0.44 81.02 us 21.00 us 3029.00 us 998 STAT 0.51 68.57 us 17.00 us 3039.00 us 1368 INODELK 1.26 429.06 us 274.00 us 988.00 us 539 READDIRP 1.89 109.51 us 38.00 us 2492.00 us 3175 FSTAT 5.09 62.48 us 19.00 us 4578.00 us 14967 FINODELK 5.72 229.43 us 32.00 us 979.00 us 4581 LOOKUP 11.48 356.40 us 144.00 us 16222.00 us 5917 WRITE 12.40 192.55 us 87.00 us 4372.00 us 11834 FXATTROP 12.48 325.11 us 33.00 us 10702.00 us 7051 FSYNC 46.62 2063.91 us 131.00 us 193833.00 us 4151 READ
Duration: 6958 seconds Data Read: 2782908225 bytes Data Written: 193799680 bytes
gluster> volume info engine
Volume Name: engine Type: Replicate Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de Status: Started Snapshot Count: 0 Number of Bricks: 1 x (2 + 1) = 3 Transport-type: tcp Bricks: Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter) Options Reconfigured: diagnostics.count-fop-hits: on diagnostics.latency-measurement: on performance.strict-o-direct: on nfs.disable: on user.cifs: off network.ping-timeout: 30 cluster.shd-max-threads: 6 cluster.shd-wait-qlength: 10000 cluster.locking-scheme: granular cluster.data-self-heal-algorithm: full performance.low-prio-threads: 32 features.shard-block-size: 512MB features.shard: on storage.owner-gid: 36 storage.owner-uid: 36 cluster.server-quorum-type: server cluster.quorum-type: auto network.remote-dio: off cluster.eager-lock: off performance.stat-prefetch: off performance.io-cache: off performance.read-ahead: off performance.quick-read: off performance.readdir-ahead: on geo-replication.indexing: on geo-replication.ignore-pid-check: on changelog.changelog: on
On Tue, May 29, 2018 at 9:17 PM, Krutika Dhananjay <kdhananj@redhat.com> wrote:
Adding Ravi to look into the heal issue.
As for the fsync hang and subsequent I/O errors, it looks a lot like https://bugzilla.redhat.com/show_bug.cgi?id=1497156, and Paolo Bonzini from QEMU pointed out that it would be fixed by the following commit:
commit e72c9a2a67a6400c8ef3d01d4c461dbbbfa0e1f0 Author: Paolo Bonzini <pbonzini@redhat.com> Date: Wed Jun 21 16:35:46 2017 +0200
scsi: virtio_scsi: let host do exception handling
virtio_scsi tries to do exception handling after the default 30 seconds timeout expires. However, it's better to let the host control the timeout, otherwise with a heavy I/O load it is likely that an abort will also timeout. This leads to fatal errors like filesystems going offline.
Disable the 'sd' timeout and allow the host to do exception handling, following the precedent of the storvsc driver.
Hannes has a proposal to introduce timeouts in virtio, but this provides an immediate solution for stable kernels too.
[mkp: fixed typo]
Reported-by: Douglas Miller <dougmill@linux.vnet.ibm.com> Cc: "James E.J. Bottomley" <jejb@linux.vnet.ibm.com> Cc: "Martin K. Petersen" <martin.petersen@oracle.com> Cc: Hannes Reinecke <hare@suse.de> Cc: linux-scsi@vger.kernel.org Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
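As a quick sanity check against this theory, you can look at the per-device SCSI command timeout inside an affected guest. This is only a sketch: device names (sd*) are examples, and on kernels without the fix above this typically reports the default 30 seconds the commit message refers to.

```shell
# Show the per-device SCSI command timeout that the commit above disables
# for virtio-scsi disks. Device names (sd*) are examples.
for t in /sys/block/sd*/device/timeout; do
    [ -e "$t" ] && printf '%s: %s seconds\n' "$t" "$(cat "$t")"
done
true  # an unmatched glob is not an error on systems without sd* devices
```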
Adding Paolo/Kevin to comment.
As for the poor gluster performance, could you disable cluster.eager-lock and see if that makes any difference:
# gluster volume set <VOL> cluster.eager-lock off
Do also capture the volume profile again if you still see performance issues after disabling eager-lock.
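The re-capture cycle can be scripted roughly as follows. This is a sketch, not an exact procedure: "engine" is the volume name from this thread (substitute yours), and the five-minute sample window is an arbitrary choice.

```shell
# Capture a fresh gluster volume profile after changing eager-lock.
# capture_profile [VOLUME] - VOLUME defaults to "engine" (an assumption
# based on this thread; substitute your own volume name).
capture_profile() {
    vol=${1:-engine}
    gluster volume profile "$vol" start
    sleep 300    # let a representative workload run against the volume
    gluster volume profile "$vol" info > "/tmp/${vol}-profile-$(date +%s).txt"
    gluster volume profile "$vol" stop
}
```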
-Krutika
On Wed, May 30, 2018 at 6:55 AM, Jim Kusznir <jim@palousetech.com> wrote:
I also finally found the following in my system log on one server:
[10679.524491] INFO: task glusterclogro:14933 blocked for more than 120 seconds. [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10679.527144] glusterclogro D ffff97209832bf40 0 14933 1 0x00000080 [10679.527150] Call Trace: [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.527268] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10679.527271] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [10679.527275] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.527279] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds. [10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10679.529956] glusterposixfsy D ffff972495f84f10 0 14941 1 0x00000080 [10679.529961] Call Trace: [10679.529966] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.530003] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.530008] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.530038] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.530042] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.530046] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10679.530050] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [10679.530054] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.530058] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120 seconds. [10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
[10679.533732] glusteriotwr13 D ffff9720a83f0000 0 15486 1 0x00000080 [10679.533738] Call Trace: [10679.533747] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.533799] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.533806] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.533846] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.533852] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.533858] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10679.533863] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [10679.533868] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.533873] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10919.512757] INFO: task glusterclogro:14933 blocked for more than 120 seconds. [10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10919.516663] glusterclogro D ffff97209832bf40 0 14933 1 0x00000080 [10919.516677] Call Trace: [10919.516690] [<ffffffffb9913f79>] schedule+0x29/0x70 [10919.516696] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [10919.516703] [<ffffffffb951cc04>] ? blk_finish_plug+0x14/0x40 [10919.516768] [<ffffffffc05e9224>] ? _xfs_buf_ioapply+0x334/0x460 [xfs] [10919.516774] [<ffffffffb991432d>] wait_for_completion+0xfd/0x140 [10919.516782] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10919.516821] [<ffffffffc05eb0a3>] ? _xfs_buf_read+0x23/0x40 [xfs] [10919.516859] [<ffffffffc05eafa9>] xfs_buf_submit_wait+0xf9/0x1d0 [xfs] [10919.516902] [<ffffffffc061b279>] ? 
xfs_trans_read_buf_map+0x199/0x400 [xfs] [10919.516940] [<ffffffffc05eb0a3>] _xfs_buf_read+0x23/0x40 [xfs] [10919.516977] [<ffffffffc05eb1b9>] xfs_buf_read_map+0xf9/0x160 [xfs] [10919.517022] [<ffffffffc061b279>] xfs_trans_read_buf_map+0x199/0x400 [xfs] [10919.517057] [<ffffffffc05c8d04>] xfs_da_read_buf+0xd4/0x100 [xfs] [10919.517091] [<ffffffffc05c8d53>] xfs_da3_node_read+0x23/0xd0 [xfs] [10919.517126] [<ffffffffc05c9fee>] xfs_da3_node_lookup_int+0x6e/0x2f0 [xfs] [10919.517160] [<ffffffffc05d5a1d>] xfs_dir2_node_lookup+0x4d/0x170 [xfs] [10919.517194] [<ffffffffc05ccf5d>] xfs_dir_lookup+0x1bd/0x1e0 [xfs] [10919.517233] [<ffffffffc05fd8d9>] xfs_lookup+0x69/0x140 [xfs] [10919.517271] [<ffffffffc05fa018>] xfs_vn_lookup+0x78/0xc0 [xfs] [10919.517278] [<ffffffffb9425cf3>] lookup_real+0x23/0x60 [10919.517283] [<ffffffffb9426702>] __lookup_hash+0x42/0x60 [10919.517288] [<ffffffffb942d519>] SYSC_renameat2+0x3a9/0x5a0 [10919.517296] [<ffffffffb94d3753>] ? selinux_file_free_security+0x2 3/0x30 [10919.517304] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10919.517309] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10919.517313] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10919.517318] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10919.517323] [<ffffffffb942e58e>] SyS_renameat2+0xe/0x10 [10919.517328] [<ffffffffb942e5ce>] SyS_rename+0x1e/0x20 [10919.517333] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10919.517339] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11159.496095] INFO: task glusteriotwr9:15482 blocked for more than 120 seconds. [11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11159.498978] glusteriotwr9 D ffff971fa0fa1fa0 0 15482 1 0x00000080 [11159.498984] Call Trace: [11159.498995] [<ffffffffb9911f00>] ? 
bit_wait+0x50/0x50 [11159.498999] [<ffffffffb9913f79>] schedule+0x29/0x70 [11159.499003] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11159.499056] [<ffffffffc05dd9b7>] ? xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs] [11159.499082] [<ffffffffc05dd43e>] ? xfs_iext_bno_to_irec+0x8e/0xd0 [xfs] [11159.499090] [<ffffffffb92f7a12>] ? ktime_get_ts64+0x52/0xf0 [11159.499093] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50 [11159.499097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20 [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50 [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0 [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90 [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40 [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70 [11159.499125] [<ffffffffb9394e85>] grab_cache_page_write_begin+0x 55/0xc0 [11159.499130] [<ffffffffb9484b76>] iomap_write_begin+0x66/0x100 [11159.499135] [<ffffffffb9484edf>] iomap_write_actor+0xcf/0x1d0 [11159.499140] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 [11159.499144] [<ffffffffb94854e7>] iomap_apply+0xb7/0x150 [11159.499149] [<ffffffffb9485621>] iomap_file_buffered_write+0xa1 /0xe0 [11159.499153] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 [11159.499182] [<ffffffffc05f025d>] xfs_file_buffered_aio_write+0x12d/0x2c0 [xfs] [11159.499213] [<ffffffffc05f057d>] xfs_file_aio_write+0x18d/0x1b0 [xfs] [11159.499217] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 [11159.499222] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 [11159.499225] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 [11159.499230] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11159.499234] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11159.499238] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than 120 seconds. 
[11279.490197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.491665] xfsaild/dm-10 D ffff9720a8660fd0 0 1134 2 0x00000000 [11279.491671] Call Trace: [11279.491682] [<ffffffffb92a3a2e>] ? try_to_del_timer_sync+0x5e/0x90 [11279.491688] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.491744] [<ffffffffc060de36>] _xfs_log_force+0x1c6/0x2c0 [xfs] [11279.491750] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.491783] [<ffffffffc0619fec>] ? xfsaild+0x16c/0x6f0 [xfs] [11279.491817] [<ffffffffc060df5c>] xfs_log_force+0x2c/0x70 [xfs] [11279.491849] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs] [11279.491880] [<ffffffffc0619fec>] xfsaild+0x16c/0x6f0 [xfs] [11279.491913] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs] [11279.491919] [<ffffffffb92bb161>] kthread+0xd1/0xe0 [11279.491926] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 [11279.491932] [<ffffffffb9920677>] ret_from_fork_nospec_begin+0x2 1/0x21 [11279.491936] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 [11279.491976] INFO: task glusterclogfsyn:14934 blocked for more than 120 seconds. [11279.493466] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.494952] glusterclogfsyn D ffff97209832af70 0 14934 1 0x00000080 [11279.494957] Call Trace: [11279.494979] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod] [11279.494983] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.494987] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11279.494997] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod] [11279.495001] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11279.495005] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140 [11279.495010] [<ffffffffb92cf1b0>] ? 
wake_up_state+0x20/0x20 [11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 [11279.495049] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs] [11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] [11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.495090] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11279.495094] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [11279.495098] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.495102] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11279.495105] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds. [11279.496606] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.498114] glusterposixfsy D ffff972495f84f10 0 14941 1 0x00000080 [11279.498118] Call Trace: [11279.498134] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod] [11279.498138] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.498142] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11279.498152] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod] [11279.498156] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11279.498160] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140 [11279.498165] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.498169] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 [11279.498202] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs] [11279.498231] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] [11279.498238] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.498242] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11279.498246] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [11279.498250] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.498254] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11279.498257] INFO: task glusteriotwr1:14950 blocked for more than 120 seconds. 
[11279.499789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.501343] glusteriotwr1 D ffff97208b6daf70 0 14950 1 0x00000080 [11279.501348] Call Trace: [11279.501353] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.501390] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [11279.501396] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.501428] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [11279.501432] [<ffffffffb944ef3f>] generic_write_sync+0x4f/0x70 [11279.501461] [<ffffffffc05f0545>] xfs_file_aio_write+0x155/0x1b0 [xfs] [11279.501466] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 [11279.501471] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 [11279.501475] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 [11279.501479] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11279.501483] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.501489] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11279.501493] INFO: task glusteriotwr4:14953 blocked for more than 120 seconds. [11279.503047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.504630] glusteriotwr4 D ffff972499f2bf40 0 14953 1 0x00000080 [11279.504635] Call Trace: [11279.504640] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.504676] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [11279.504681] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.504710] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [11279.504714] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.504718] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11279.504722] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [11279.504725] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.504730] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [12127.466494] perf: interrupt took too long (8263 > 8150), lowering kernel.perf_event_max_sample_rate to 24000
-------------------- I think this is the cause of the massive oVirt performance issues irrespective of gluster volume. At the time this happened, I was also ssh'ed into the host and was running some rpm query commands. I had just run rpm -qa | grep glusterfs (to verify what version was actually installed), and that command took almost 2 minutes to return! Normally it takes less than 2 seconds. That is all pure local SSD I/O, too....
I'm no expert, but it's my understanding that any time software causes these kinds of issues, it's a serious bug in that software, even if it's just mis-handled exceptions. Is this correct?
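While the problem is occurring, one quick way to see the same condition the kernel warnings report is to list tasks in uninterruptible sleep. A sketch using standard procps column options:

```shell
# List tasks currently in uninterruptible sleep ("D" state): the condition
# behind the "blocked for more than 120 seconds" messages above.
ps -eo pid,stat,wchan:32,comm | awk 'NR == 1 || $2 ~ /^D/'
```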
--Jim
On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir <jim@palousetech.com> wrote:
I think this is the profile information for one of the volumes that lives on the SSDs and is fully operational with no down/problem disks:
[root@ovirt2 yum.repos.d]# gluster volume profile data info Brick: ovirt2.nwfiber.com:/gluster/brick2/data ---------------------------------------------- Cumulative Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 983 2696 1059 No. of Writes: 0 1113 302
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 852 88608 53526 No. of Writes: 522 812340 76257
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 54351 241901 15024 No. of Writes: 21636 8656 8976
Block Size: 131072b+ No. of Reads: 524156 No. of Writes: 296071 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 4189 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1257 RELEASEDIR 0.00 46.19 us 12.00 us 187.00 us 69 FLUSH 0.00 147.00 us 78.00 us 367.00 us 86 REMOVEXATTR 0.00 223.46 us 24.00 us 1166.00 us 149 READDIR 0.00 565.34 us 76.00 us 3639.00 us 88 FTRUNCATE 0.00 263.28 us 20.00 us 28385.00 us 228 LK 0.00 98.84 us 2.00 us 880.00 us 1198 OPENDIR 0.00 91.59 us 26.00 us 10371.00 us 3853 STATFS 0.00 494.14 us 17.00 us 193439.00 us 1171 GETXATTR 0.00 299.42 us 35.00 us 9799.00 us 2044 READDIRP 0.00 1965.31 us 110.00 us 382258.00 us 321 XATTROP 0.01 113.40 us 24.00 us 61061.00 us 8134 STAT 0.01 755.38 us 57.00 us 607603.00 us 3196 DISCARD 0.05 2690.09 us 58.00 us 2704761.00 us 3206 OPEN 0.10 119978.25 us 97.00 us 9406684.00 us 154 SETATTR 0.18 101.73 us 28.00 us 700477.00 us 313379 FSTAT 0.23 1059.84 us 25.00 us 2716124.00 us 38255 LOOKUP 0.47 1024.11 us 54.00 us 6197164.00 us 81455 FXATTROP 1.72 2984.00 us 15.00 us 37098954.00 us 103020 FINODELK 5.92 44315.32 us 51.00 us 24731536.00 us 23957 FSYNC 13.27 2399.78 us 25.00 us 22089540.00 us 991005 READ 37.00 5980.43 us 52.00 us 22099889.00 us 1108976 WRITE 41.04 5452.75 us 13.00 us 22102452.00 us 1349053 INODELK
Duration: 10026 seconds Data Read: 80046027759 bytes Data Written: 44496632320 bytes
Interval 1 Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 983 2696 1059 No. of Writes: 0 838 185
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 852 85856 51575 No. of Writes: 382 705802 57812
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 52673 232093 14984 No. of Writes: 13499 4908 4242
Block Size: 131072b+ No. of Reads: 460040 No. of Writes: 6411 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2093 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1093 RELEASEDIR 0.00 53.38 us 26.00 us 111.00 us 16 FLUSH 0.00 145.14 us 78.00 us 367.00 us 71 REMOVEXATTR 0.00 190.96 us 114.00 us 298.00 us 71 SETATTR 0.00 213.38 us 24.00 us 1145.00 us 90 READDIR 0.00 263.28 us 20.00 us 28385.00 us 228 LK 0.00 101.76 us 2.00 us 880.00 us 1093 OPENDIR 0.01 93.60 us 27.00 us 10371.00 us 3090 STATFS 0.02 537.47 us 17.00 us 193439.00 us 1038 GETXATTR 0.03 297.44 us 35.00 us 9799.00 us 1990 READDIRP 0.03 2357.28 us 110.00 us 382258.00 us 253 XATTROP 0.04 385.93 us 58.00 us 47593.00 us 2091 OPEN 0.04 114.86 us 24.00 us 61061.00 us 7715 STAT 0.06 444.59 us 57.00 us 333240.00 us 3053 DISCARD 0.42 316.24 us 25.00 us 290728.00 us 29823 LOOKUP 0.73 257.92 us 54.00 us 344812.00 us 63296 FXATTROP 1.37 98.30 us 28.00 us 67621.00 us 313172 FSTAT 1.58 2124.69 us 51.00 us 849200.00 us 16717 FSYNC 5.73 162.46 us 52.00 us 748492.00 us 794079 WRITE 7.19 2065.17 us 16.00 us 37098954.00 us 78381 FINODELK 36.44 886.32 us 25.00 us 2216436.00 us 925421 READ 46.30 1178.04 us 13.00 us 1700704.00 us 884635 INODELK
Duration: 7485 seconds Data Read: 71250527215 bytes Data Written: 5119903744 bytes
Brick: ovirt3.nwfiber.com:/gluster/brick2/data ---------------------------------------------- Cumulative Stats: Block Size: 1b+ No. of Reads: 0 No. of Writes: 3264419 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 90 FORGET 0.00 0.00 us 0.00 us 0.00 us 9462 RELEASE 0.00 0.00 us 0.00 us 0.00 us 4254 RELEASEDIR 0.00 50.52 us 13.00 us 190.00 us 71 FLUSH 0.00 186.97 us 87.00 us 713.00 us 86 REMOVEXATTR 0.00 79.32 us 33.00 us 189.00 us 228 LK 0.00 220.98 us 129.00 us 513.00 us 86 SETATTR 0.01 259.30 us 26.00 us 2632.00 us 137 READDIR 0.02 322.76 us 145.00 us 2125.00 us 321 XATTROP 0.03 109.55 us 2.00 us 1258.00 us 1193 OPENDIR 0.05 70.21 us 21.00 us 431.00 us 3196 DISCARD 0.05 169.26 us 21.00 us 2315.00 us 1545 GETXATTR 0.12 176.85 us 63.00 us 2844.00 us 3206 OPEN 0.61 303.49 us 90.00 us 3085.00 us 9633 FSTAT 2.44 305.66 us 28.00 us 3716.00 us 38230 LOOKUP 4.52 266.22 us 55.00 us 53424.00 us 81455 FXATTROP 6.96 1397.99 us 51.00 us 64822.00 us 23889 FSYNC 16.48 84.74 us 25.00 us 6917.00 us 932592 WRITE 30.16 106.90 us 13.00 us 3920189.00 us 1353046 INODELK 38.55 1794.52 us 14.00 us 16210553.00 us 103039 FINODELK
Duration: 66562 seconds Data Read: 0 bytes Data Written: 3264419 bytes
Interval 1 Stats:

Block Size:                  1b+
No. of Reads:                  0
No. of Writes:            794080

 %-latency  Avg-latency  Min-Latency     Max-Latency  No. of calls  Fop
 ---------  -----------  -----------     -----------  ------------  ----
      0.00      0.00 us      0.00 us         0.00 us          2093  RELEASE
      0.00      0.00 us      0.00 us         0.00 us          1093  RELEASEDIR
      0.00     70.31 us     26.00 us       125.00 us            16  FLUSH
      0.00    193.10 us    103.00 us       713.00 us            71  REMOVEXATTR
      0.01    227.32 us    133.00 us       513.00 us            71  SETATTR
      0.01     79.32 us     33.00 us       189.00 us           228  LK
      0.01    259.83 us     35.00 us      1138.00 us            89  READDIR
      0.03    318.26 us    145.00 us      2047.00 us           253  XATTROP
      0.04    112.67 us      3.00 us      1258.00 us          1093  OPENDIR
      0.06    167.98 us     23.00 us      1951.00 us          1014  GETXATTR
      0.08     70.97 us     22.00 us       431.00 us          3053  DISCARD
      0.13    183.78 us     66.00 us      2844.00 us          2091  OPEN
      1.01    303.82 us     90.00 us      3085.00 us          9610  FSTAT
      3.27    316.59 us     30.00 us      3716.00 us         29820  LOOKUP
      5.83    265.79 us     59.00 us     53424.00 us         63296  FXATTROP
      7.95   1373.89 us     51.00 us     64822.00 us         16717  FSYNC
     23.17    851.99 us     14.00 us  16210553.00 us         78555  FINODELK
     24.04     87.44 us     27.00 us      6917.00 us        794081  WRITE
     34.36    111.91 us     14.00 us    984871.00 us        886790  INODELK
Duration: 7485 seconds Data Read: 0 bytes Data Written: 794080 bytes
-----------------------

Here is the data from the volume that is backed by the SSHDs and has one failed disk:

[root@ovirt2 yum.repos.d]# gluster volume profile data-hdd info
Brick: 172.172.1.12:/gluster/brick3/data-hdd
--------------------------------------------
Cumulative Stats:

Block Size:               256b+     512b+    1024b+
No. of Reads:              1702        86        16
No. of Writes:                0       767        71

Block Size:              2048b+    4096b+    8192b+
No. of Reads:                19     51841      2049
No. of Writes:               76     60668     35727

Block Size:             16384b+   32768b+   65536b+
No. of Reads:              1744       639      1088
No. of Writes:             8524      2410      1285

Block Size:            131072b+
No. of Reads:            771999
No. of Writes:            29584

 %-latency  Avg-latency  Min-Latency     Max-Latency  No. of calls  Fop
 ---------  -----------  -----------     -----------  ------------  ----
      0.00      0.00 us      0.00 us         0.00 us          2902  RELEASE
      0.00      0.00 us      0.00 us         0.00 us          1517  RELEASEDIR
      0.00    197.00 us    197.00 us       197.00 us             1  FTRUNCATE
      0.00     70.24 us     16.00 us       758.00 us            51  FLUSH
      0.00    143.93 us     82.00 us       305.00 us            57  REMOVEXATTR
      0.00    178.63 us    105.00 us       712.00 us            60  SETATTR
      0.00     67.30 us     19.00 us       572.00 us           555  LK
      0.00    322.80 us     23.00 us      4673.00 us           138  READDIR
      0.00    336.56 us    106.00 us     11994.00 us           237  XATTROP
      0.00     84.70 us     28.00 us      1071.00 us          3469  STATFS
      0.01    387.75 us      2.00 us    146017.00 us          1467  OPENDIR
      0.01    148.59 us     21.00 us     64374.00 us          4454  STAT
      0.02    783.02 us     16.00 us     93502.00 us          1902  GETXATTR
      0.03   1516.10 us     17.00 us    210690.00 us          1364  ENTRYLK
      0.03   2555.47 us    300.00 us    674454.00 us          1064  READDIRP
      0.07     85.74 us     19.00 us     68340.00 us         62849  FSTAT
      0.07   1978.12 us     59.00 us    202596.00 us          2729  OPEN
      0.22    708.57 us     15.00 us    394799.00 us         25447  LOOKUP
      5.94   2331.74 us     15.00 us   1099530.00 us        207534  FINODELK
      7.31   8311.75 us     58.00 us   1800216.00 us         71668  FXATTROP
     12.49   7735.19 us     51.00 us   3595513.00 us        131642  WRITE
     17.70    957.08 us     16.00 us  13700466.00 us       1508160  INODELK
     24.55   2546.43 us     26.00 us   5077347.00 us        786060  READ
     31.56  49699.15 us     47.00 us   3746331.00 us         51777  FSYNC
Duration: 10101 seconds Data Read: 101562897361 bytes Data Written: 4834450432 bytes
Interval 0 Stats:

Block Size:               256b+     512b+    1024b+
No. of Reads:              1702        86        16
No. of Writes:                0       767        71

Block Size:              2048b+    4096b+    8192b+
No. of Reads:                19     51841      2049
No. of Writes:               76     60668     35727

Block Size:             16384b+   32768b+   65536b+
No. of Reads:              1744       639      1088
No. of Writes:             8524      2410      1285

Block Size:            131072b+
No. of Reads:            771999
No. of Writes:            29584

 %-latency  Avg-latency  Min-Latency     Max-Latency  No. of calls  Fop
 ---------  -----------  -----------     -----------  ------------  ----
      0.00      0.00 us      0.00 us         0.00 us          2902  RELEASE
      0.00      0.00 us      0.00 us         0.00 us          1517  RELEASEDIR
      0.00    197.00 us    197.00 us       197.00 us             1  FTRUNCATE
      0.00     70.24 us     16.00 us       758.00 us            51  FLUSH
      0.00    143.93 us     82.00 us       305.00 us            57  REMOVEXATTR
      0.00    178.63 us    105.00 us       712.00 us            60  SETATTR
      0.00     67.30 us     19.00 us       572.00 us           555  LK
      0.00    322.80 us     23.00 us      4673.00 us           138  READDIR
      0.00    336.56 us    106.00 us     11994.00 us           237  XATTROP
      0.00     84.70 us     28.00 us      1071.00 us          3469  STATFS
      0.01    387.75 us      2.00 us    146017.00 us          1467  OPENDIR
      0.01    148.59 us     21.00 us     64374.00 us          4454  STAT
      0.02    783.02 us     16.00 us     93502.00 us          1902  GETXATTR
      0.03   1516.10 us     17.00 us    210690.00 us          1364  ENTRYLK
      0.03   2555.47 us    300.00 us    674454.00 us          1064  READDIRP
      0.07     85.73 us     19.00 us     68340.00 us         62849  FSTAT
      0.07   1978.12 us     59.00 us    202596.00 us          2729  OPEN
      0.22    708.57 us     15.00 us    394799.00 us         25447  LOOKUP
      5.94   2334.57 us     15.00 us   1099530.00 us        207534  FINODELK
      7.31   8311.49 us     58.00 us   1800216.00 us         71668  FXATTROP
     12.49   7735.32 us     51.00 us   3595513.00 us        131642  WRITE
     17.71    957.08 us     16.00 us  13700466.00 us       1508160  INODELK
     24.56   2546.42 us     26.00 us   5077347.00 us        786060  READ
     31.54  49651.63 us     47.00 us   3746331.00 us         51777  FSYNC
Duration: 10101 seconds Data Read: 101562897361 bytes Data Written: 4834450432 bytes
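[Editor's note: to make the profile above easier to read, here is a rough Python sketch of how to rank the Fops by total time consumed. The (avg-latency, call-count) pairs are copied from the data-hdd cumulative table above (only the heaviest Fops are included); as far as I can tell, this avg × calls product is also how the %-latency column is derived, which is why FSYNC dominates despite having far fewer calls than WRITE or INODELK.]

```python
# Rank file operations by total time spent, from the
# "gluster volume profile data-hdd info" cumulative table above.
# Pairs are (avg_latency_us, number_of_calls), copied verbatim.
fops = {
    "WRITE":    (7735.19, 131642),
    "INODELK":  (957.08, 1508160),
    "READ":     (2546.43, 786060),
    "FSYNC":    (49699.15, 51777),
    "FINODELK": (2331.74, 207534),
    "FXATTROP": (8311.75, 71668),
}

# Total time per Fop = average latency * call count; it is total time,
# not per-call latency, that determines an Fop's share of the profile.
totals = {fop: avg * calls for fop, (avg, calls) in fops.items()}
grand = sum(totals.values())

for fop, t in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{fop:9s} {t / 1e6:10.1f} s total  {100 * t / grand:5.1f}% of sampled time")
```

On these numbers FSYNC alone accounts for roughly a third of all brick time, consistent with the 31.56 %-latency figure gluster reports for it.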
On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir <jim@palousetech.com> wrote:
Thank you for your response.
I have 4 gluster volumes. Three are replica 2 + arbiter: the data bricks are on ovirt1 and ovirt2, with the arbiter brick on ovirt3. The 4th volume is replica 3, with a brick on all three ovirt machines.
The first 3 volumes are on an SSD; the 4th is on a Seagate SSHD (the same in all three machines). On ovirt3, the SSHD has reported hard I/O failures, and that brick is offline. However, the other two replicas are fully operational (although they still show entries in the heal info output that won't go away; that may be expected until I replace the failed disk).
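[Editor's note: a quick way to tell whether the heal backlog mentioned above is actually shrinking is to count the pending entries per brick across repeated runs of `gluster volume heal <vol> info`. A minimal sketch, assuming the standard output format shown earlier in this thread; the sample text uses shortened, made-up paths:]

```python
# Count pending heal entries per brick from the text output of
# "gluster volume heal <vol> info". Run this against successive
# captures: if the counts never drop, the heal is stuck.
sample = """\
Brick 172.172.1.11:/gluster/brick3/data-hdd
/cc65f671/images/48d7ecb8/0d6080b0
/cc65f671/images/647be733/b453a300
Status: Connected
Number of entries: 2

Brick 172.172.1.12:/gluster/brick3/data-hdd
/cc65f671/images/48d7ecb8/0d6080b0
Status: Connected
Number of entries: 1
"""

def entries_per_brick(text):
    counts, brick = {}, None
    for line in text.splitlines():
        if line.startswith("Brick "):
            brick = line.split(" ", 1)[1]
        elif line.startswith("Number of entries:") and brick:
            counts[brick] = int(line.rsplit(":", 1)[1])
    return counts

print(entries_per_brick(sample))
```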
What is bothering me is that ALL 4 gluster volumes are showing horrible performance issues. At this point, as the bad disk has been completely offlined, I would expect gluster to perform at normal speed, but that is definitely not the case.
I've also noticed that the performance hits seem to come in waves: things work acceptably (but slowly) for a while, then suddenly it's as if all disk I/O on all volumes (including the hosts' non-gluster local OS disk volumes) pauses for about 30 seconds before resuming. During those times, I start getting "VM not responding" and "host not responding" notices, and the applications have major issues.
I've shut down most of my VMs and am down to just my essential core VMs (I shed about 75% of them). I'm still experiencing the same issues.
Am I correct in believing that once the failed disk was brought offline, performance should return to normal?
On Tue, May 29, 2018 at 1:27 PM, Alex K <rightkicktech@gmail.com> wrote:
> I would check disks status and accessibility of mount points where > your gluster volumes reside. > > On Tue, May 29, 2018, 22:28 Jim Kusznir <jim@palousetech.com> wrote: > >> On one ovirt server, I'm now seeing these messages: >> [56474.239725] blk_update_request: 63 callbacks suppressed >> [56474.239732] blk_update_request: I/O error, dev dm-2, sector 0 >> [56474.240602] blk_update_request: I/O error, dev dm-2, sector >> 3905945472 >> [56474.241346] blk_update_request: I/O error, dev dm-2, sector >> 3905945584 >> [56474.242236] blk_update_request: I/O error, dev dm-2, sector 2048 >> [56474.243072] blk_update_request: I/O error, dev dm-2, sector >> 3905943424 >> [56474.243997] blk_update_request: I/O error, dev dm-2, sector >> 3905943536 >> [56474.247347] blk_update_request: I/O error, dev dm-2, sector 0 >> [56474.248315] blk_update_request: I/O error, dev dm-2, sector >> 3905945472 >> [56474.249231] blk_update_request: I/O error, dev dm-2, sector >> 3905945584 >> [56474.250221] blk_update_request: I/O error, dev dm-2, sector 2048 >> >> >> >> >> On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir <jim@palousetech.com> >> wrote: >> >>> I see in messages on ovirt3 (my 3rd machine, the one upgraded to >>> 4.2): >>> >>> May 29 11:54:41 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>> database connection failed (No such file or directory) >>> May 29 11:54:51 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>> database connection failed (No such file or directory) >>> May 29 11:55:01 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>> database connection failed (No such file or directory) >>> (appears a lot). >>> >>> I also found on the ssh session of that, some sysv warnings about >>> the backing disk for one of the gluster volumes (straight replica 3). The >>> glusterfs process for that disk on that machine went offline. 
Its my >>> understanding that it should continue to work with the other two machines >>> while I attempt to replace that disk, right? Attempted writes (touching an >>> empty file) can take 15 seconds, repeating it later will be much faster. >>> >>> Gluster generates a bunch of different log files, I don't know >>> what ones you want, or from which machine(s). >>> >>> How do I do "volume profiling"? >>> >>> Thanks! >>> >>> On Tue, May 29, 2018 at 11:53 AM, Sahina Bose <sabose@redhat.com> >>> wrote: >>> >>>> Do you see errors reported in the mount logs for the volume? If >>>> so, could you attach the logs? >>>> Any issues with your underlying disks. Can you also attach output >>>> of volume profiling? >>>> >>>> On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir < >>>> jim@palousetech.com> wrote: >>>> >>>>> Ok, things have gotten MUCH worse this morning. I'm getting >>>>> random errors from VMs, right now, about a third of my VMs have been paused >>>>> due to storage issues, and most of the remaining VMs are not performing >>>>> well. >>>>> >>>>> At this point, I am in full EMERGENCY mode, as my production >>>>> services are now impacted, and I'm getting calls coming in with problems... >>>>> >>>>> I'd greatly appreciate help...VMs are running VERY slowly (when >>>>> they run), and they are steadily getting worse. I don't know why. I was >>>>> seeing CPU peaks (to 100%) on several VMs, in perfect sync, for a few >>>>> minutes at a time (while the VM became unresponsive and any VMs I was >>>>> logged into that were linux were giving me the CPU stuck messages in my >>>>> origional post). Is all this storage related? >>>>> >>>>> I also have two different gluster volumes for VM storage, and >>>>> only one had the issues, but now VMs in both are being affected at the same >>>>> time and same way. 
>>>>> >>>>> --Jim >>>>> >>>>> On Mon, May 28, 2018 at 10:50 PM, Sahina Bose <sabose@redhat.com >>>>> > wrote: >>>>> >>>>>> [Adding gluster-users to look at the heal issue] >>>>>> >>>>>> On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir < >>>>>> jim@palousetech.com> wrote: >>>>>> >>>>>>> Hello: >>>>>>> >>>>>>> I've been having some cluster and gluster performance issues >>>>>>> lately. I also found that my cluster was out of date, and was trying to >>>>>>> apply updates (hoping to fix some of these), and discovered the ovirt 4.1 >>>>>>> repos were taken completely offline. So, I was forced to begin an upgrade >>>>>>> to 4.2. According to docs I found/read, I needed only add the new repo, do >>>>>>> a yum update, reboot, and be good on my hosts (did the yum update, the >>>>>>> engine-setup on my hosted engine). Things seemed to work relatively well, >>>>>>> except for a gluster sync issue that showed up. >>>>>>> >>>>>>> My cluster is a 3 node hyperconverged cluster. I upgraded the >>>>>>> hosted engine first, then engine 3. When engine 3 came back up, for some >>>>>>> reason one of my gluster volumes would not sync. 
Here's sample output: >>>>>>> >>>>>>> [root@ovirt3 ~]# gluster volume heal data-hdd info >>>>>>> Brick 172.172.1.11:/gluster/brick3/data-hdd >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>>>>>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>>>>>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>>>>>> 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>>>>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>>>>>> 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>>>>>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>>>>> Status: Connected >>>>>>> Number of entries: 8 >>>>>>> >>>>>>> Brick 172.172.1.12:/gluster/brick3/data-hdd >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>>>>>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>>>>>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>>>>>> 
4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>>>>>> 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>>>>>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>>>>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>>>>> Status: Connected >>>>>>> Number of entries: 8 >>>>>>> >>>>>>> Brick 172.172.1.13:/gluster/brick3/data-hdd >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>>>>>> 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>>>>>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>>>>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>>>>>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>>>>>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>>>>>> 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>>>>> Status: Connected >>>>>>> Number of entries: 8 >>>>>>> >>>>>>> --------- >>>>>>> Its been in this state for a couple days now, and bandwidth >>>>>>> monitoring shows no appreciable data moving. I've tried repeatedly >>>>>>> commanding a full heal from all three clusters in the node. 
Its always the >>>>>>> same files that need healing. >>>>>>> >>>>>>> When running gluster volume heal data-hdd statistics, I see >>>>>>> sometimes different information, but always some number of "heal failed" >>>>>>> entries. It shows 0 for split brain. >>>>>>> >>>>>>> I'm not quite sure what to do. I suspect it may be due to >>>>>>> nodes 1 and 2 still being on the older ovirt/gluster release, but I'm >>>>>>> afraid to upgrade and reboot them until I have a good gluster sync (don't >>>>>>> need to create a split brain issue). How do I proceed with this? >>>>>>> >>>>>>> Second issue: I've been experiencing VERY POOR performance on >>>>>>> most of my VMs. To the tune that logging into a windows 10 vm via remote >>>>>>> desktop can take 5 minutes, launching quickbooks inside said vm can easily >>>>>>> take 10 minutes. On some linux VMs, I get random messages like this: >>>>>>> Message from syslogd@unifi at May 28 20:39:23 ... >>>>>>> kernel:[6171996.308904] NMI watchdog: BUG: soft lockup - >>>>>>> CPU#0 stuck for 22s! [mongod:14766] >>>>>>> >>>>>>> (the process and PID are often different) >>>>>>> >>>>>>> I'm not quite sure what to do about this either. My initial >>>>>>> thought was upgrad everything to current and see if its still there, but I >>>>>>> cannot move forward with that until my gluster is healed... >>>>>>> >>>>>>> Thanks! 
>>>>>>> --Jim
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> Users mailing list -- users@ovirt.org
>>>>>>> To unsubscribe send an email to users-leave@ovirt.org
>>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>>>>>>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3LEV6ZQ3JV2XLAL7NYBTXOYMYUOTIRQF/

I've been back at it, and I still am unable to get more than one of my physical nodes to come online in ovirt, nor am I able to get more than two of the gluster volumes (storage domains) to show online within ovirt.

In Storage -> Volumes, they all show offline (many with one brick down, which is correct: I have one server off). However, in Storage -> Domains, they all show down (although somehow, I got machines from the data domain to start), and there's no obvious way I've seen to tell ovirt to bring them online. It claims none of the servers can see the volume, but I've quadruple-checked that the volumes are mounted on the engines and are fully functional there. I have some more VMs I need to get back up and running. How do I fix this?

ovirt1, for unknown reasons, will not work. Attempts to bring it online fail, and I haven't figured out which log file to look in yet for more details.

On Wed, May 30, 2018 at 9:36 AM, Jim Kusznir <jim@palousetech.com> wrote:
Hi all again:
I'm now subscribed to gluster-users as well, so I should get any replies from that side too.
At this point, I am seeing acceptable (although slower than I expect) performance much of the time, with periodic massive spikes in latency (occasionally so bad as to cause ovirt to report a bad engine health status). Often, if I check the logs just then, I'll see those call traces in xfs_log_worker or gluster processes, as well as hung task timeout messages.
As to the profile suggesting ovirt1 had poorer performance than ovirt2, I don't have an explanation. The output of gluster volume info engine is identical on both hosts. The computers and drives are identical (Dell R610 with a PERC 6/i controller configured to pass through the drive). ovirt1's and ovirt2's partition schemes do vary somewhat, but I figured that wouldn't be a big deal and gluster would just use the least common denominator. Is that accurate?
In case it was missed, here are the dmesg errors being thrown by gluster (as far as I can tell). They definitely started after I upgraded from gluster 3.8 to 3.12:
[ 6604.536156] perf: interrupt took too long (9593 > 9565), lowering kernel.perf_event_max_sample_rate to 20000
[ 8280.188734] INFO: task xfsaild/dm-10:1061 blocked for more than 120 seconds.
[ 8280.188787] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 8280.188837] xfsaild/dm-10   D ffff93203a2eeeb0     0  1061      2 0x00000000
[ 8280.188843] Call Trace:
[ 8280.188857] [<ffffffff960a3a2e>] ? try_to_del_timer_sync+0x5e/0x90
[ 8280.188864] [<ffffffff96713f79>] schedule+0x29/0x70
[ 8280.188932] [<ffffffffc049fe36>] _xfs_log_force+0x1c6/0x2c0 [xfs]
[ 8280.188939] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20
[ 8280.188972] [<ffffffffc04abfec>] ? xfsaild+0x16c/0x6f0 [xfs]
[ 8280.189003] [<ffffffffc049ff5c>] xfs_log_force+0x2c/0x70 [xfs]
[ 8280.189035] [<ffffffffc04abe80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[ 8280.189067] [<ffffffffc04abfec>] xfsaild+0x16c/0x6f0 [xfs]
[ 8280.189100] [<ffffffffc04abe80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[ 8280.189105] [<ffffffff960bb161>] kthread+0xd1/0xe0
[ 8280.189109] [<ffffffff960bb090>] ? insert_kthread_work+0x40/0x40
[ 8280.189117] [<ffffffff96720677>] ret_from_fork_nospec_begin+0x21/0x21
[ 8280.189121] [<ffffffff960bb090>] ? insert_kthread_work+0x40/0x40
[ 8280.189161] INFO: task glusterclogfsyn:6421 blocked for more than 120 seconds.
[ 8280.189207] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 8280.189256] glusterclogfsyn D ffff93203a2edee0     0  6421      1 0x00000080
[ 8280.189260] Call Trace:
[ 8280.189265] [<ffffffff96713f79>] schedule+0x29/0x70
[ 8280.189300] [<ffffffffc04a0388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[ 8280.189305] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20
[ 8280.189333] [<ffffffffc0480b97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[ 8280.189340] [<ffffffff9624f0e7>] do_fsync+0x67/0xb0
[ 8280.189345] [<ffffffff9672076f>] ? system_call_after_swapgs+0xbc/0x160
[ 8280.189348] [<ffffffff9624f3d0>] SyS_fsync+0x10/0x20
[ 8280.189352] [<ffffffff9672082f>] system_call_fastpath+0x1c/0x21
[ 8280.189356] [<ffffffff9672077b>] ? system_call_after_swapgs+0xc8/0x160
[ 8280.189360] INFO: task glusteriotwr2:766 blocked for more than 120 seconds.
[ 8280.189404] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 8280.189453] glusteriotwr2   D ffff931cfa510000     0   766      1 0x00000080
[ 8280.189457] Call Trace:
[ 8280.189476] [<ffffffffc01a3839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod]
[ 8280.189480] [<ffffffff96713f79>] schedule+0x29/0x70
[ 8280.189484] [<ffffffff967118e9>] schedule_timeout+0x239/0x2c0
[ 8280.189494] [<ffffffffc01a3d98>] ? dm_make_request+0x128/0x1a0 [dm_mod]
[ 8280.189498] [<ffffffff9671348d>] io_schedule_timeout+0xad/0x130
[ 8280.189502] [<ffffffff967145ad>] wait_for_completion_io+0xfd/0x140
[ 8280.189507] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20
[ 8280.189513] [<ffffffff9631e574>] blkdev_issue_flush+0xb4/0x110
[ 8280.189546] [<ffffffffc04984b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs]
[ 8280.189588] [<ffffffffc0480c40>] xfs_file_fsync+0x1b0/0x1e0 [xfs]
[ 8280.189593] [<ffffffff9624f0e7>] do_fsync+0x67/0xb0
[ 8280.189597] [<ffffffff9672076f>] ? system_call_after_swapgs+0xbc/0x160
[ 8280.189600] [<ffffffff9624f3d0>] SyS_fsync+0x10/0x20
[ 8280.189604] [<ffffffff9672082f>] system_call_fastpath+0x1c/0x21
[ 8280.189608] [<ffffffff9672077b>] ? system_call_after_swapgs+0xc8/0x160
[ 8400.186024] INFO: task xfsaild/dm-10:1061 blocked for more than 120 seconds.
[ 8400.186077] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 8400.186127] xfsaild/dm-10   D ffff93203a2eeeb0     0  1061      2 0x00000000
[ 8400.186133] Call Trace:
[ 8400.186146] [<ffffffff960a3a2e>] ? try_to_del_timer_sync+0x5e/0x90
[ 8400.186153] [<ffffffff96713f79>] schedule+0x29/0x70
[ 8400.186221] [<ffffffffc049fe36>] _xfs_log_force+0x1c6/0x2c0 [xfs]
[ 8400.186227] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20
[ 8400.186260] [<ffffffffc04abfec>] ? xfsaild+0x16c/0x6f0 [xfs]
[ 8400.186292] [<ffffffffc049ff5c>] xfs_log_force+0x2c/0x70 [xfs]
[ 8400.186324] [<ffffffffc04abe80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[ 8400.186356] [<ffffffffc04abfec>] xfsaild+0x16c/0x6f0 [xfs]
[ 8400.186388] [<ffffffffc04abe80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[ 8400.186394] [<ffffffff960bb161>] kthread+0xd1/0xe0
[ 8400.186398] [<ffffffff960bb090>] ? insert_kthread_work+0x40/0x40
[ 8400.186405] [<ffffffff96720677>] ret_from_fork_nospec_begin+0x21/0x21
[ 8400.186409] [<ffffffff960bb090>] ? insert_kthread_work+0x40/0x40
[ 8400.186450] INFO: task glusterclogfsyn:6421 blocked for more than 120 seconds.
[ 8400.186496] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 8400.186545] glusterclogfsyn D ffff93203a2edee0     0  6421      1 0x00000080
[ 8400.186549] Call Trace:
[ 8400.186554] [<ffffffff96713f79>] schedule+0x29/0x70
[ 8400.186589] [<ffffffffc04a0388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[ 8400.186593] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20
[ 8400.186622] [<ffffffffc0480b97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[ 8400.186629] [<ffffffff9624f0e7>] do_fsync+0x67/0xb0
[ 8400.186634] [<ffffffff9672076f>] ? system_call_after_swapgs+0xbc/0x160
[ 8400.186637] [<ffffffff9624f3d0>] SyS_fsync+0x10/0x20
[ 8400.186641] [<ffffffff9672082f>] system_call_fastpath+0x1c/0x21
[ 8400.186645] [<ffffffff9672077b>] ? system_call_after_swapgs+0xc8/0x160
[ 8400.186649] INFO: task glusteriotwr2:766 blocked for more than 120 seconds.
[ 8400.186693] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 8400.186741] glusteriotwr2   D ffff931cfa510000     0   766      1 0x00000080
[ 8400.186746] Call Trace:
[ 8400.186764] [<ffffffffc01a3839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod]
[ 8400.186768] [<ffffffff96713f79>] schedule+0x29/0x70
[ 8400.186772] [<ffffffff967118e9>] schedule_timeout+0x239/0x2c0
[ 8400.186782] [<ffffffffc01a3d98>] ? dm_make_request+0x128/0x1a0 [dm_mod]
[ 8400.186786] [<ffffffff9671348d>] io_schedule_timeout+0xad/0x130
[ 8400.186790] [<ffffffff967145ad>] wait_for_completion_io+0xfd/0x140
[ 8400.186795] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20
[ 8400.186800] [<ffffffff9631e574>] blkdev_issue_flush+0xb4/0x110
[ 8400.186833] [<ffffffffc04984b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs]
[ 8400.186862] [<ffffffffc0480c40>] xfs_file_fsync+0x1b0/0x1e0 [xfs]
[ 8400.186879] [<ffffffff9624f0e7>] do_fsync+0x67/0xb0
[ 8400.186884] [<ffffffff9672076f>] ? system_call_after_swapgs+0xbc/0x160
[ 8400.186887] [<ffffffff9624f3d0>] SyS_fsync+0x10/0x20
[ 8400.186891] [<ffffffff9672082f>] system_call_fastpath+0x1c/0x21
[ 8400.186895] [<ffffffff9672077b>] ? system_call_after_swapgs+0xc8/0x160
[ 8400.186912] INFO: task kworker/3:4:3191 blocked for more than 120 seconds.
[ 8400.186957] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 8400.187006] kworker/3:4     D ffff931faf598fd0     0  3191      2 0x00000080
[ 8400.187041] Workqueue: xfs-sync/dm-10 xfs_log_worker [xfs]
[ 8400.187043] Call Trace:
[ 8400.187050] [<ffffffff9614e45f>] ? delayacct_end+0x8f/0xb0
[ 8400.187055] [<ffffffff960dbbc8>] ? update_group_power+0x28/0x280
[ 8400.187059] [<ffffffff96713f79>] schedule+0x29/0x70
[ 8400.187062] [<ffffffff967118e9>] schedule_timeout+0x239/0x2c0
[ 8400.187066] [<ffffffff9671432d>] wait_for_completion+0xfd/0x140
[ 8400.187071] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20
[ 8400.187075] [<ffffffff960b418d>] flush_work+0xfd/0x190
[ 8400.187079] [<ffffffff960b06e0>] ? move_linked_works+0x90/0x90
[ 8400.187110] [<ffffffffc04a1d8a>] xlog_cil_force_lsn+0x8a/0x210 [xfs]
[ 8400.187142] [<ffffffffc049fcf5>] _xfs_log_force+0x85/0x2c0 [xfs]
[ 8400.187174] [<ffffffffc049ffd6>] ? xfs_log_worker+0x36/0x100 [xfs]
[ 8400.187206] [<ffffffffc049ff5c>] xfs_log_force+0x2c/0x70 [xfs]
[ 8400.187237] [<ffffffffc049ffd6>] xfs_log_worker+0x36/0x100 [xfs]
[ 8400.187241] [<ffffffff960b312f>] process_one_work+0x17f/0x440
[ 8400.187245] [<ffffffff960b3df6>] worker_thread+0x126/0x3c0
[ 8400.187249] [<ffffffff960b3cd0>] ? manage_workers.isra.24+0x2a0/0x2a0
[ 8400.187253] [<ffffffff960bb161>] kthread+0xd1/0xe0
[ 8400.187257] [<ffffffff960bb090>] ? insert_kthread_work+0x40/0x40
[ 8400.187261] [<ffffffff96720677>] ret_from_fork_nospec_begin+0x21/0x21
[ 8400.187265] [<ffffffff960bb090>] ? insert_kthread_work+0x40/0x40
[13493.643624] perf: interrupt took too long (12026 > 11991), lowering kernel.perf_event_max_sample_rate to 16000
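[Editor's note: the hung-task reports above are the useful signal buried in the trace dump; a quick sketch like the following can tally which tasks keep getting blocked, to confirm the stalls cluster on gluster/XFS threads. The sample lines are copied from the dmesg output above.]

```python
import re
from collections import Counter

# Sample hung-task lines, taken from the dmesg capture above.
dmesg = """\
[ 8280.188734] INFO: task xfsaild/dm-10:1061 blocked for more than 120 seconds.
[ 8280.189161] INFO: task glusterclogfsyn:6421 blocked for more than 120 seconds.
[ 8280.189360] INFO: task glusteriotwr2:766 blocked for more than 120 seconds.
[ 8400.186024] INFO: task xfsaild/dm-10:1061 blocked for more than 120 seconds.
"""

# Extract (task name, blocked seconds) from each hung-task report line.
hung = re.findall(r"INFO: task (\S+?):\d+ blocked for more than (\d+) seconds", dmesg)
counts = Counter(task for task, _secs in hung)
print(counts.most_common())
```

If the same xfsaild/gluster fsync threads show up every 120-second sweep, the bricks' backing storage (not gluster itself) is the first thing to suspect.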
On Wed, May 30, 2018 at 2:44 AM, Krutika Dhananjay <kdhananj@redhat.com> wrote:
The profile seems to suggest very high latencies on the brick at ovirt1.nwfiber.com:/gluster/brick1/engine. ovirt2.* shows decent numbers. Is everything OK with the brick on ovirt1? Are the bricks of the engine volume on both these servers identical in terms of their config?
-Krutika
On Wed, May 30, 2018 at 3:07 PM, Jim Kusznir <jim@palousetech.com> wrote:
Hi:
Thank you. I was finally able to get my cluster minimally functional again (it's 2am local here; I had to be up by 6am). I set cluster.eager-lock off, but I'm still seeing performance issues. Here's the engine volume:
Brick: ovirt1.nwfiber.com:/gluster/brick1/engine
------------------------------------------------
Cumulative Stats:

Block Size:               256b+     512b+    4096b+
No. of Reads:               702        24       772
No. of Writes:                0       939      4802

Block Size:              8192b+   16384b+   32768b+
No. of Reads:                 0        51        49
No. of Writes:             4211       858       291

Block Size:             65536b+  131072b+
No. of Reads:                49      5232
No. of Writes:              333       588

 %-latency  Avg-latency  Min-Latency    Max-Latency  No. of calls  Fop
 ---------  -----------  -----------    -----------  ------------  ----
      0.00      0.00 us      0.00 us        0.00 us             3  FORGET
      0.00      0.00 us      0.00 us        0.00 us          1227  RELEASE
      0.00      0.00 us      0.00 us        0.00 us          1035  RELEASEDIR
      0.00    827.00 us    619.00 us     1199.00 us            10  READDIRP
      0.00     98.38 us     30.00 us      535.00 us           144  ENTRYLK
      0.00    180.11 us    121.00 us      257.00 us            94  REMOVEXATTR
      0.00    208.03 us     23.00 us     1980.00 us           182  GETXATTR
      0.00   1212.71 us     35.00 us    29459.00 us            38  READDIR
      0.01    260.39 us    172.00 us      552.00 us           376  XATTROP
      0.01    305.10 us     45.00 us   144983.00 us           727  STATFS
      0.03   1267.23 us     22.00 us   418399.00 us           451  FLUSH
      0.04   1164.14 us     86.00 us   437638.00 us           600  OPEN
      0.07   2253.75 us      3.00 us   645037.00 us           598  OPENDIR
      0.08  17360.97 us    149.00 us   962156.00 us            94  SETATTR
      0.12   2733.36 us     50.00 us  1683945.00 us           877  FSTAT
      0.21   3041.55 us     29.00 us  1021732.00 us          1368  INODELK
      0.24    389.93 us    107.00 us   550203.00 us         11840  FXATTROP
      0.34  14820.49 us     38.00 us  1935527.00 us           444  STAT
      0.88   3765.77 us     42.00 us  1341978.00 us          4581  LOOKUP
     15.06  19624.04 us     26.00 us  6412511.00 us         14971  FINODELK
     21.63  59817.26 us     45.00 us  4008832.00 us          7054  FSYNC
     23.19 263941.14 us     86.00 us  4130892.00 us          1714  READ
     38.10 125578.64 us    207.00 us  7682338.00 us          5919  WRITE
Duration: 6957 seconds Data Read: 694832857 bytes Data Written: 192751104 bytes
Interval 0 Stats:

Block Size:               256b+     512b+    4096b+
No. of Reads:               702        24       772
No. of Writes:                0       939      4802

Block Size:              8192b+   16384b+   32768b+
No. of Reads:                 0        51        49
No. of Writes:             4211       858       291

Block Size:             65536b+  131072b+
No. of Reads:                49      5232
No. of Writes:              333       588

 %-latency  Avg-latency  Min-Latency    Max-Latency  No. of calls  Fop
 ---------  -----------  -----------    -----------  ------------  ----
      0.00      0.00 us      0.00 us        0.00 us             3  FORGET
      0.00      0.00 us      0.00 us        0.00 us          1227  RELEASE
      0.00      0.00 us      0.00 us        0.00 us          1035  RELEASEDIR
      0.00    827.00 us    619.00 us     1199.00 us            10  READDIRP
      0.00     98.38 us     30.00 us      535.00 us           144  ENTRYLK
      0.00    180.11 us    121.00 us      257.00 us            94  REMOVEXATTR
      0.00    208.03 us     23.00 us     1980.00 us           182  GETXATTR
      0.00   1212.71 us     35.00 us    29459.00 us            38  READDIR
      0.01    260.39 us    172.00 us      552.00 us           376  XATTROP
      0.01    305.10 us     45.00 us   144983.00 us           727  STATFS
      0.03   1267.23 us     22.00 us   418399.00 us           451  FLUSH
      0.04   1164.14 us     86.00 us   437638.00 us           600  OPEN
      0.07   2253.75 us      3.00 us   645037.00 us           598  OPENDIR
      0.08  17360.97 us    149.00 us   962156.00 us            94  SETATTR
      0.12   2733.36 us     50.00 us  1683945.00 us           877  FSTAT
      0.21   3041.55 us     29.00 us  1021732.00 us          1368  INODELK
      0.24    389.93 us    107.00 us   550203.00 us         11840  FXATTROP
      0.34  14820.49 us     38.00 us  1935527.00 us           444  STAT
      0.91   3859.75 us     42.00 us  1341978.00 us          4581  LOOKUP
     15.05  19622.80 us     26.00 us  6412511.00 us         14971  FINODELK
     21.62  59809.04 us     45.00 us  4008832.00 us          7054  FSYNC
     23.18 263941.14 us     86.00 us  4130892.00 us          1714  READ
     38.09 125562.74 us    207.00 us  7682338.00 us          5919  WRITE
Duration: 6957 seconds Data Read: 694832857 bytes Data Written: 192751104 bytes
Brick: ovirt2.nwfiber.com:/gluster/brick1/engine ------------------------------------------------ Cumulative Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 704 167 216 No. of Writes: 0 939 0
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 424 34918 9866 No. of Writes: 0 4802 4211
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 12695 3290 3880 No. of Writes: 858 291 333
Block Size: 131072b+ No. of Reads: 13718 No. of Writes: 596 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 3 FORGET 0.00 0.00 us 0.00 us 0.00 us 1228 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1041 RELEASEDIR 0.06 74.26 us 21.00 us 915.00 us 144 ENTRYLK 0.07 141.97 us 87.00 us 268.00 us 94 REMOVEXATTR 0.08 370.68 us 22.00 us 1311.00 us 38 READDIR 0.09 184.88 us 108.00 us 316.00 us 94 SETATTR 0.14 56.41 us 21.00 us 540.00 us 451 FLUSH 0.14 139.42 us 18.00 us 3251.00 us 184 GETXATTR 0.31 96.06 us 2.00 us 2293.00 us 598 OPENDIR 0.35 88.76 us 31.00 us 716.00 us 727 STATFS 0.43 131.96 us 59.00 us 784.00 us 600 OPEN 0.44 214.77 us 110.00 us 851.00 us 376 XATTROP 0.44 81.02 us 21.00 us 3029.00 us 998 STAT 0.51 68.57 us 17.00 us 3039.00 us 1368 INODELK 1.26 429.06 us 274.00 us 988.00 us 539 READDIRP 1.89 109.48 us 38.00 us 2492.00 us 3175 FSTAT 5.09 62.48 us 19.00 us 4578.00 us 14967 FINODELK 5.72 229.43 us 32.00 us 979.00 us 4581 LOOKUP 11.48 356.40 us 144.00 us 16222.00 us 5917 WRITE 12.40 192.55 us 87.00 us 4372.00 us 11834 FXATTROP 12.48 325.11 us 33.00 us 10702.00 us 7051 FSYNC 46.61 2063.11 us 131.00 us 193833.00 us 4151 READ
Duration: 6958 seconds Data Read: 2782908225 bytes Data Written: 193799680 bytes
Interval 0 Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 704 167 216 No. of Writes: 0 939 0
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 424 34918 9866 No. of Writes: 0 4802 4211
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 12695 3290 3880 No. of Writes: 858 291 333
Block Size: 131072b+ No. of Reads: 13718 No. of Writes: 596 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 3 FORGET 0.00 0.00 us 0.00 us 0.00 us 1228 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1041 RELEASEDIR 0.06 74.26 us 21.00 us 915.00 us 144 ENTRYLK 0.07 141.97 us 87.00 us 268.00 us 94 REMOVEXATTR 0.08 370.68 us 22.00 us 1311.00 us 38 READDIR 0.09 184.88 us 108.00 us 316.00 us 94 SETATTR 0.14 56.41 us 21.00 us 540.00 us 451 FLUSH 0.14 139.42 us 18.00 us 3251.00 us 184 GETXATTR 0.31 96.06 us 2.00 us 2293.00 us 598 OPENDIR 0.35 88.76 us 31.00 us 716.00 us 727 STATFS 0.43 131.96 us 59.00 us 784.00 us 600 OPEN 0.44 214.77 us 110.00 us 851.00 us 376 XATTROP 0.44 81.02 us 21.00 us 3029.00 us 998 STAT 0.51 68.57 us 17.00 us 3039.00 us 1368 INODELK 1.26 429.06 us 274.00 us 988.00 us 539 READDIRP 1.89 109.51 us 38.00 us 2492.00 us 3175 FSTAT 5.09 62.48 us 19.00 us 4578.00 us 14967 FINODELK 5.72 229.43 us 32.00 us 979.00 us 4581 LOOKUP 11.48 356.40 us 144.00 us 16222.00 us 5917 WRITE 12.40 192.55 us 87.00 us 4372.00 us 11834 FXATTROP 12.48 325.11 us 33.00 us 10702.00 us 7051 FSYNC 46.62 2063.91 us 131.00 us 193833.00 us 4151 READ
Duration: 6958 seconds Data Read: 2782908225 bytes Data Written: 193799680 bytes
gluster> volume info engine
Volume Name: engine
Type: Replicate
Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
performance.strict-o-direct: on
nfs.disable: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 6
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: off
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
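As an aside, individual options from a dump like the one above can also be queried one at a time with `gluster volume get` (available in reasonably recent Gluster releases). A minimal sketch, assuming a gluster node with a volume named "engine":

```shell
# Sketch: query single reconfigured options instead of scanning the
# full `gluster volume info` output. VOL is a placeholder.
VOL=engine
if command -v gluster >/dev/null 2>&1; then
    gluster volume get "$VOL" cluster.eager-lock
    gluster volume get "$VOL" performance.strict-o-direct
else
    echo "gluster CLI not found; run this on a gluster node" >&2
fi
```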
On Tue, May 29, 2018 at 9:17 PM, Krutika Dhananjay <kdhananj@redhat.com> wrote:
Adding Ravi to look into the heal issue.
As for the fsync hang and subsequent IO errors, it looks a lot like https://bugzilla.redhat.com/show_bug.cgi?id=1497156, and Paolo Bonzini from qemu pointed out that it would be fixed by the following commit:
commit e72c9a2a67a6400c8ef3d01d4c461dbbbfa0e1f0
Author: Paolo Bonzini <pbonzini@redhat.com>
Date:   Wed Jun 21 16:35:46 2017 +0200
scsi: virtio_scsi: let host do exception handling
virtio_scsi tries to do exception handling after the default 30 seconds timeout expires. However, it's better to let the host control the timeout, otherwise with a heavy I/O load it is likely that an abort will also timeout. This leads to fatal errors like filesystems going offline.
Disable the 'sd' timeout and allow the host to do exception handling, following the precedent of the storvsc driver.
Hannes has a proposal to introduce timeouts in virtio, but this provides an immediate solution for stable kernels too.
[mkp: fixed typo]
Reported-by: Douglas Miller <dougmill@linux.vnet.ibm.com>
Cc: "James E.J. Bottomley" <jejb@linux.vnet.ibm.com>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Hannes Reinecke <hare@suse.de>
Cc: linux-scsi@vger.kernel.org
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
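The timeout the commit is concerned with is exposed per device in sysfs inside the guest. A hedged sketch for inspecting (and, as root, raising) it — the device name `sda` is an assumption and depends on how the disk is attached:

```shell
# Sketch: inspect the per-device SCSI command timeout that the commit
# above disables in favor of host-side exception handling.
# DEV=sda is a placeholder; substitute your guest's SCSI disk.
DEV=sda
if [ -r "/sys/block/$DEV/device/timeout" ]; then
    echo "current timeout: $(cat /sys/block/$DEV/device/timeout) seconds"
    # As root, raising it lets the host handle errors longer before
    # the guest aborts the command:
    # echo 180 > /sys/block/$DEV/device/timeout
else
    echo "/sys/block/$DEV/device/timeout not present (not a SCSI disk?)" >&2
fi
```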
Adding Paolo/Kevin to comment.
As for the poor gluster performance, could you disable cluster.eager-lock and see if that makes any difference:
# gluster volume set <VOL> cluster.eager-lock off
Do also capture the volume profile again if you still see performance issues after disabling eager-lock.
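The two steps above (disable eager-lock, then re-capture the profile) can be sketched like this; VOL is a placeholder for the affected volume name:

```shell
# Sketch: disable eager-lock, restart profiling to get fresh counters,
# then capture the profile to a file after the workload has run a while.
VOL=data-hdd
if command -v gluster >/dev/null 2>&1; then
    gluster volume set "$VOL" cluster.eager-lock off
    gluster volume profile "$VOL" stop    # ok if profiling wasn't running
    gluster volume profile "$VOL" start
    # ...let the VMs run for several minutes, then:
    gluster volume profile "$VOL" info > "/tmp/${VOL}-profile.txt"
else
    echo "gluster CLI not found; run this on a gluster node" >&2
fi
```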
-Krutika
On Wed, May 30, 2018 at 6:55 AM, Jim Kusznir <jim@palousetech.com> wrote:
I also finally found the following in my system log on one server:
[10679.524491] INFO: task glusterclogro:14933 blocked for more than 120 seconds. [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10679.527144] glusterclogro D ffff97209832bf40 0 14933 1 0x00000080 [10679.527150] Call Trace: [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.527268] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10679.527271] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [10679.527275] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.527279] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds. [10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10679.529956] glusterposixfsy D ffff972495f84f10 0 14941 1 0x00000080 [10679.529961] Call Trace: [10679.529966] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.530003] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.530008] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.530038] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.530042] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.530046] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10679.530050] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [10679.530054] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.530058] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120 seconds. [10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
[10679.533732] glusteriotwr13 D ffff9720a83f0000 0 15486 1 0x00000080 [10679.533738] Call Trace: [10679.533747] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.533799] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.533806] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.533846] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.533852] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.533858] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10679.533863] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [10679.533868] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.533873] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10919.512757] INFO: task glusterclogro:14933 blocked for more than 120 seconds. [10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10919.516663] glusterclogro D ffff97209832bf40 0 14933 1 0x00000080 [10919.516677] Call Trace: [10919.516690] [<ffffffffb9913f79>] schedule+0x29/0x70 [10919.516696] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [10919.516703] [<ffffffffb951cc04>] ? blk_finish_plug+0x14/0x40 [10919.516768] [<ffffffffc05e9224>] ? _xfs_buf_ioapply+0x334/0x460 [xfs] [10919.516774] [<ffffffffb991432d>] wait_for_completion+0xfd/0x140 [10919.516782] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10919.516821] [<ffffffffc05eb0a3>] ? _xfs_buf_read+0x23/0x40 [xfs] [10919.516859] [<ffffffffc05eafa9>] xfs_buf_submit_wait+0xf9/0x1d0 [xfs] [10919.516902] [<ffffffffc061b279>] ? 
xfs_trans_read_buf_map+0x199/0x400 [xfs] [10919.516940] [<ffffffffc05eb0a3>] _xfs_buf_read+0x23/0x40 [xfs] [10919.516977] [<ffffffffc05eb1b9>] xfs_buf_read_map+0xf9/0x160 [xfs] [10919.517022] [<ffffffffc061b279>] xfs_trans_read_buf_map+0x199/0x400 [xfs] [10919.517057] [<ffffffffc05c8d04>] xfs_da_read_buf+0xd4/0x100 [xfs] [10919.517091] [<ffffffffc05c8d53>] xfs_da3_node_read+0x23/0xd0 [xfs] [10919.517126] [<ffffffffc05c9fee>] xfs_da3_node_lookup_int+0x6e/0x2f0 [xfs] [10919.517160] [<ffffffffc05d5a1d>] xfs_dir2_node_lookup+0x4d/0x170 [xfs] [10919.517194] [<ffffffffc05ccf5d>] xfs_dir_lookup+0x1bd/0x1e0 [xfs] [10919.517233] [<ffffffffc05fd8d9>] xfs_lookup+0x69/0x140 [xfs] [10919.517271] [<ffffffffc05fa018>] xfs_vn_lookup+0x78/0xc0 [xfs] [10919.517278] [<ffffffffb9425cf3>] lookup_real+0x23/0x60 [10919.517283] [<ffffffffb9426702>] __lookup_hash+0x42/0x60 [10919.517288] [<ffffffffb942d519>] SYSC_renameat2+0x3a9/0x5a0 [10919.517296] [<ffffffffb94d3753>] ? selinux_file_free_security+0x2 3/0x30 [10919.517304] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10919.517309] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10919.517313] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10919.517318] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10919.517323] [<ffffffffb942e58e>] SyS_renameat2+0xe/0x10 [10919.517328] [<ffffffffb942e5ce>] SyS_rename+0x1e/0x20 [10919.517333] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10919.517339] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11159.496095] INFO: task glusteriotwr9:15482 blocked for more than 120 seconds. [11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11159.498978] glusteriotwr9 D ffff971fa0fa1fa0 0 15482 1 0x00000080 [11159.498984] Call Trace: [11159.498995] [<ffffffffb9911f00>] ? 
bit_wait+0x50/0x50 [11159.498999] [<ffffffffb9913f79>] schedule+0x29/0x70 [11159.499003] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11159.499056] [<ffffffffc05dd9b7>] ? xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs] [11159.499082] [<ffffffffc05dd43e>] ? xfs_iext_bno_to_irec+0x8e/0xd0 [xfs] [11159.499090] [<ffffffffb92f7a12>] ? ktime_get_ts64+0x52/0xf0 [11159.499093] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50 [11159.499097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20 [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50 [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0 [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90 [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40 [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70 [11159.499125] [<ffffffffb9394e85>] grab_cache_page_write_begin+0x 55/0xc0 [11159.499130] [<ffffffffb9484b76>] iomap_write_begin+0x66/0x100 [11159.499135] [<ffffffffb9484edf>] iomap_write_actor+0xcf/0x1d0 [11159.499140] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 [11159.499144] [<ffffffffb94854e7>] iomap_apply+0xb7/0x150 [11159.499149] [<ffffffffb9485621>] iomap_file_buffered_write+0xa1 /0xe0 [11159.499153] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 [11159.499182] [<ffffffffc05f025d>] xfs_file_buffered_aio_write+0x12d/0x2c0 [xfs] [11159.499213] [<ffffffffc05f057d>] xfs_file_aio_write+0x18d/0x1b0 [xfs] [11159.499217] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 [11159.499222] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 [11159.499225] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 [11159.499230] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11159.499234] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11159.499238] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than 120 seconds. 
[11279.490197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.491665] xfsaild/dm-10 D ffff9720a8660fd0 0 1134 2 0x00000000 [11279.491671] Call Trace: [11279.491682] [<ffffffffb92a3a2e>] ? try_to_del_timer_sync+0x5e/0x90 [11279.491688] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.491744] [<ffffffffc060de36>] _xfs_log_force+0x1c6/0x2c0 [xfs] [11279.491750] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.491783] [<ffffffffc0619fec>] ? xfsaild+0x16c/0x6f0 [xfs] [11279.491817] [<ffffffffc060df5c>] xfs_log_force+0x2c/0x70 [xfs] [11279.491849] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs] [11279.491880] [<ffffffffc0619fec>] xfsaild+0x16c/0x6f0 [xfs] [11279.491913] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs] [11279.491919] [<ffffffffb92bb161>] kthread+0xd1/0xe0 [11279.491926] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 [11279.491932] [<ffffffffb9920677>] ret_from_fork_nospec_begin+0x2 1/0x21 [11279.491936] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 [11279.491976] INFO: task glusterclogfsyn:14934 blocked for more than 120 seconds. [11279.493466] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.494952] glusterclogfsyn D ffff97209832af70 0 14934 1 0x00000080 [11279.494957] Call Trace: [11279.494979] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod] [11279.494983] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.494987] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11279.494997] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod] [11279.495001] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11279.495005] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140 [11279.495010] [<ffffffffb92cf1b0>] ? 
wake_up_state+0x20/0x20 [11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 [11279.495049] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs] [11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] [11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.495090] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11279.495094] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [11279.495098] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.495102] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11279.495105] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds. [11279.496606] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.498114] glusterposixfsy D ffff972495f84f10 0 14941 1 0x00000080 [11279.498118] Call Trace: [11279.498134] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod] [11279.498138] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.498142] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11279.498152] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod] [11279.498156] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11279.498160] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140 [11279.498165] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.498169] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 [11279.498202] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs] [11279.498231] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] [11279.498238] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.498242] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11279.498246] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [11279.498250] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.498254] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11279.498257] INFO: task glusteriotwr1:14950 blocked for more than 120 seconds. 
[11279.499789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.501343] glusteriotwr1 D ffff97208b6daf70 0 14950 1 0x00000080 [11279.501348] Call Trace: [11279.501353] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.501390] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [11279.501396] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.501428] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [11279.501432] [<ffffffffb944ef3f>] generic_write_sync+0x4f/0x70 [11279.501461] [<ffffffffc05f0545>] xfs_file_aio_write+0x155/0x1b0 [xfs] [11279.501466] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 [11279.501471] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 [11279.501475] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 [11279.501479] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11279.501483] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.501489] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11279.501493] INFO: task glusteriotwr4:14953 blocked for more than 120 seconds. [11279.503047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.504630] glusteriotwr4 D ffff972499f2bf40 0 14953 1 0x00000080 [11279.504635] Call Trace: [11279.504640] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.504676] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [11279.504681] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.504710] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [11279.504714] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.504718] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11279.504722] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [11279.504725] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.504730] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [12127.466494] perf: interrupt took too long (8263 > 8150), lowering kernel.perf_event_max_sample_rate to 24000
-------------------- I think this is the cause of the massive ovirt performance issues irrespective of gluster volume. At the time this happened, I was also ssh'ed into the host and was running some rpm query commands. I had just run rpm -qa | grep glusterfs (to verify what version was actually installed), and that command took almost 2 minutes to return! Normally it takes less than 2 seconds. That is all pure local SSD IO, too.
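One quick way to confirm whether plain local I/O (independent of gluster) is stalling is to time a small O_DIRECT write against the local SSD filesystem. A hedged sketch — the target path is a placeholder and should point at local, non-gluster storage:

```shell
# Sketch: probe local direct I/O latency. Run under `time` to see the
# wall-clock cost. TARGET is a placeholder; use a local SSD filesystem,
# not a gluster mount (and note O_DIRECT fails on tmpfs).
TARGET=/tmp/io-probe.bin
if dd if=/dev/zero of="$TARGET" bs=1M count=64 oflag=direct 2>/dev/null; then
    echo "direct write completed"
else
    echo "direct I/O failed here (tmpfs?); retry on the SSD filesystem" >&2
fi
rm -f "$TARGET"
```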
I'm no expert, but my understanding is that any time software causes these kinds of issues, it's a serious bug in that software, even if it's just mishandled exceptions. Is this correct?
--Jim
On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir <jim@palousetech.com> wrote:
I think this is the profile information for one of the volumes that lives on the SSDs and is fully operational with no down/problem disks:
[root@ovirt2 yum.repos.d]# gluster volume profile data info Brick: ovirt2.nwfiber.com:/gluster/brick2/data ---------------------------------------------- Cumulative Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 983 2696 1059 No. of Writes: 0 1113 302
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 852 88608 53526 No. of Writes: 522 812340 76257
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 54351 241901 15024 No. of Writes: 21636 8656 8976
Block Size: 131072b+ No. of Reads: 524156 No. of Writes: 296071 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 4189 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1257 RELEASEDIR 0.00 46.19 us 12.00 us 187.00 us 69 FLUSH 0.00 147.00 us 78.00 us 367.00 us 86 REMOVEXATTR 0.00 223.46 us 24.00 us 1166.00 us 149 READDIR 0.00 565.34 us 76.00 us 3639.00 us 88 FTRUNCATE 0.00 263.28 us 20.00 us 28385.00 us 228 LK 0.00 98.84 us 2.00 us 880.00 us 1198 OPENDIR 0.00 91.59 us 26.00 us 10371.00 us 3853 STATFS 0.00 494.14 us 17.00 us 193439.00 us 1171 GETXATTR 0.00 299.42 us 35.00 us 9799.00 us 2044 READDIRP 0.00 1965.31 us 110.00 us 382258.00 us 321 XATTROP 0.01 113.40 us 24.00 us 61061.00 us 8134 STAT 0.01 755.38 us 57.00 us 607603.00 us 3196 DISCARD 0.05 2690.09 us 58.00 us 2704761.00 us 3206 OPEN 0.10 119978.25 us 97.00 us 9406684.00 us 154 SETATTR 0.18 101.73 us 28.00 us 700477.00 us 313379 FSTAT 0.23 1059.84 us 25.00 us 2716124.00 us 38255 LOOKUP 0.47 1024.11 us 54.00 us 6197164.00 us 81455 FXATTROP 1.72 2984.00 us 15.00 us 37098954.00 us 103020 FINODELK 5.92 44315.32 us 51.00 us 24731536.00 us 23957 FSYNC 13.27 2399.78 us 25.00 us 22089540.00 us 991005 READ 37.00 5980.43 us 52.00 us 22099889.00 us 1108976 WRITE 41.04 5452.75 us 13.00 us 22102452.00 us 1349053 INODELK
Duration: 10026 seconds Data Read: 80046027759 bytes Data Written: 44496632320 bytes
Interval 1 Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 983 2696 1059 No. of Writes: 0 838 185
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 852 85856 51575 No. of Writes: 382 705802 57812
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 52673 232093 14984 No. of Writes: 13499 4908 4242
Block Size: 131072b+ No. of Reads: 460040 No. of Writes: 6411 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2093 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1093 RELEASEDIR 0.00 53.38 us 26.00 us 111.00 us 16 FLUSH 0.00 145.14 us 78.00 us 367.00 us 71 REMOVEXATTR 0.00 190.96 us 114.00 us 298.00 us 71 SETATTR 0.00 213.38 us 24.00 us 1145.00 us 90 READDIR 0.00 263.28 us 20.00 us 28385.00 us 228 LK 0.00 101.76 us 2.00 us 880.00 us 1093 OPENDIR 0.01 93.60 us 27.00 us 10371.00 us 3090 STATFS 0.02 537.47 us 17.00 us 193439.00 us 1038 GETXATTR 0.03 297.44 us 35.00 us 9799.00 us 1990 READDIRP 0.03 2357.28 us 110.00 us 382258.00 us 253 XATTROP 0.04 385.93 us 58.00 us 47593.00 us 2091 OPEN 0.04 114.86 us 24.00 us 61061.00 us 7715 STAT 0.06 444.59 us 57.00 us 333240.00 us 3053 DISCARD 0.42 316.24 us 25.00 us 290728.00 us 29823 LOOKUP 0.73 257.92 us 54.00 us 344812.00 us 63296 FXATTROP 1.37 98.30 us 28.00 us 67621.00 us 313172 FSTAT 1.58 2124.69 us 51.00 us 849200.00 us 16717 FSYNC 5.73 162.46 us 52.00 us 748492.00 us 794079 WRITE 7.19 2065.17 us 16.00 us 37098954.00 us 78381 FINODELK 36.44 886.32 us 25.00 us 2216436.00 us 925421 READ 46.30 1178.04 us 13.00 us 1700704.00 us 884635 INODELK
Duration: 7485 seconds Data Read: 71250527215 bytes Data Written: 5119903744 bytes
Brick: ovirt3.nwfiber.com:/gluster/brick2/data ---------------------------------------------- Cumulative Stats: Block Size: 1b+ No. of Reads: 0 No. of Writes: 3264419 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 90 FORGET 0.00 0.00 us 0.00 us 0.00 us 9462 RELEASE 0.00 0.00 us 0.00 us 0.00 us 4254 RELEASEDIR 0.00 50.52 us 13.00 us 190.00 us 71 FLUSH 0.00 186.97 us 87.00 us 713.00 us 86 REMOVEXATTR 0.00 79.32 us 33.00 us 189.00 us 228 LK 0.00 220.98 us 129.00 us 513.00 us 86 SETATTR 0.01 259.30 us 26.00 us 2632.00 us 137 READDIR 0.02 322.76 us 145.00 us 2125.00 us 321 XATTROP 0.03 109.55 us 2.00 us 1258.00 us 1193 OPENDIR 0.05 70.21 us 21.00 us 431.00 us 3196 DISCARD 0.05 169.26 us 21.00 us 2315.00 us 1545 GETXATTR 0.12 176.85 us 63.00 us 2844.00 us 3206 OPEN 0.61 303.49 us 90.00 us 3085.00 us 9633 FSTAT 2.44 305.66 us 28.00 us 3716.00 us 38230 LOOKUP 4.52 266.22 us 55.00 us 53424.00 us 81455 FXATTROP 6.96 1397.99 us 51.00 us 64822.00 us 23889 FSYNC 16.48 84.74 us 25.00 us 6917.00 us 932592 WRITE 30.16 106.90 us 13.00 us 3920189.00 us 1353046 INODELK 38.55 1794.52 us 14.00 us 16210553.00 us 103039 FINODELK
Duration: 66562 seconds Data Read: 0 bytes Data Written: 3264419 bytes
Interval 1 Stats: Block Size: 1b+ No. of Reads: 0 No. of Writes: 794080 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2093 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1093 RELEASEDIR 0.00 70.31 us 26.00 us 125.00 us 16 FLUSH 0.00 193.10 us 103.00 us 713.00 us 71 REMOVEXATTR 0.01 227.32 us 133.00 us 513.00 us 71 SETATTR 0.01 79.32 us 33.00 us 189.00 us 228 LK 0.01 259.83 us 35.00 us 1138.00 us 89 READDIR 0.03 318.26 us 145.00 us 2047.00 us 253 XATTROP 0.04 112.67 us 3.00 us 1258.00 us 1093 OPENDIR 0.06 167.98 us 23.00 us 1951.00 us 1014 GETXATTR 0.08 70.97 us 22.00 us 431.00 us 3053 DISCARD 0.13 183.78 us 66.00 us 2844.00 us 2091 OPEN 1.01 303.82 us 90.00 us 3085.00 us 9610 FSTAT 3.27 316.59 us 30.00 us 3716.00 us 29820 LOOKUP 5.83 265.79 us 59.00 us 53424.00 us 63296 FXATTROP 7.95 1373.89 us 51.00 us 64822.00 us 16717 FSYNC 23.17 851.99 us 14.00 us 16210553.00 us 78555 FINODELK 24.04 87.44 us 27.00 us 6917.00 us 794081 WRITE 34.36 111.91 us 14.00 us 984871.00 us 886790 INODELK
Duration: 7485 seconds Data Read: 0 bytes Data Written: 794080 bytes
----------------------- Here is the data from the volume that is backed by the SSHDs and has one failed disk: [root@ovirt2 yum.repos.d]# gluster volume profile data-hdd info Brick: 172.172.1.12:/gluster/brick3/data-hdd -------------------------------------------- Cumulative Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 1702 86 16 No. of Writes: 0 767 71
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 19 51841 2049 No. of Writes: 76 60668 35727
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 1744 639 1088 No. of Writes: 8524 2410 1285
Block Size: 131072b+ No. of Reads: 771999 No. of Writes: 29584 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2902 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1517 RELEASEDIR 0.00 197.00 us 197.00 us 197.00 us 1 FTRUNCATE 0.00 70.24 us 16.00 us 758.00 us 51 FLUSH 0.00 143.93 us 82.00 us 305.00 us 57 REMOVEXATTR 0.00 178.63 us 105.00 us 712.00 us 60 SETATTR 0.00 67.30 us 19.00 us 572.00 us 555 LK 0.00 322.80 us 23.00 us 4673.00 us 138 READDIR 0.00 336.56 us 106.00 us 11994.00 us 237 XATTROP 0.00 84.70 us 28.00 us 1071.00 us 3469 STATFS 0.01 387.75 us 2.00 us 146017.00 us 1467 OPENDIR 0.01 148.59 us 21.00 us 64374.00 us 4454 STAT 0.02 783.02 us 16.00 us 93502.00 us 1902 GETXATTR 0.03 1516.10 us 17.00 us 210690.00 us 1364 ENTRYLK 0.03 2555.47 us 300.00 us 674454.00 us 1064 READDIRP 0.07 85.74 us 19.00 us 68340.00 us 62849 FSTAT 0.07 1978.12 us 59.00 us 202596.00 us 2729 OPEN 0.22 708.57 us 15.00 us 394799.00 us 25447 LOOKUP 5.94 2331.74 us 15.00 us 1099530.00 us 207534 FINODELK 7.31 8311.75 us 58.00 us 1800216.00 us 71668 FXATTROP 12.49 7735.19 us 51.00 us 3595513.00 us 131642 WRITE 17.70 957.08 us 16.00 us 13700466.00 us 1508160 INODELK 24.55 2546.43 us 26.00 us 5077347.00 us 786060 READ 31.56 49699.15 us 47.00 us 3746331.00 us 51777 FSYNC
Duration: 10101 seconds Data Read: 101562897361 bytes Data Written: 4834450432 bytes
Interval 0 Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 1702 86 16 No. of Writes: 0 767 71
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 19 51841 2049 No. of Writes: 76 60668 35727
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 1744 639 1088 No. of Writes: 8524 2410 1285
Block Size: 131072b+ No. of Reads: 771999 No. of Writes: 29584 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2902 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1517 RELEASEDIR 0.00 197.00 us 197.00 us 197.00 us 1 FTRUNCATE 0.00 70.24 us 16.00 us 758.00 us 51 FLUSH 0.00 143.93 us 82.00 us 305.00 us 57 REMOVEXATTR 0.00 178.63 us 105.00 us 712.00 us 60 SETATTR 0.00 67.30 us 19.00 us 572.00 us 555 LK 0.00 322.80 us 23.00 us 4673.00 us 138 READDIR 0.00 336.56 us 106.00 us 11994.00 us 237 XATTROP 0.00 84.70 us 28.00 us 1071.00 us 3469 STATFS 0.01 387.75 us 2.00 us 146017.00 us 1467 OPENDIR 0.01 148.59 us 21.00 us 64374.00 us 4454 STAT 0.02 783.02 us 16.00 us 93502.00 us 1902 GETXATTR 0.03 1516.10 us 17.00 us 210690.00 us 1364 ENTRYLK 0.03 2555.47 us 300.00 us 674454.00 us 1064 READDIRP 0.07 85.73 us 19.00 us 68340.00 us 62849 FSTAT 0.07 1978.12 us 59.00 us 202596.00 us 2729 OPEN 0.22 708.57 us 15.00 us 394799.00 us 25447 LOOKUP 5.94 2334.57 us 15.00 us 1099530.00 us 207534 FINODELK 7.31 8311.49 us 58.00 us 1800216.00 us 71668 FXATTROP 12.49 7735.32 us 51.00 us 3595513.00 us 131642 WRITE 17.71 957.08 us 16.00 us 13700466.00 us 1508160 INODELK 24.56 2546.42 us 26.00 us 5077347.00 us 786060 READ 31.54 49651.63 us 47.00 us 3746331.00 us 51777 FSYNC
Duration: 10101 seconds Data Read: 101562897361 bytes Data Written: 4834450432 bytes
On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir <jim@palousetech.com> wrote:
> Thank you for your response. > > I have 4 gluster volumes. 3 are replica 2 + arbitrator. replica > bricks are on ovirt1 and ovirt2, arbitrator on ovirt3. The 4th volume is > replica 3, with a brick on all three ovirt machines. > > The first 3 volumes are on an SSD disk; the 4th is on a Seagate SSHD > (same in all three machines). On ovirt3, the SSHD has reported hard IO > failures, and that brick is offline. However, the other two replicas are > fully operational (although they still show contents in the heal info > command that won't go away, but that may be the case until I replace the > failed disk). > > What is bothering me is that ALL 4 gluster volumes are showing > horrible performance issues. At this point, as the bad disk has been > completely offlined, I would expect gluster to perform at normal speed, but > that is definitely not the case. > > I've also noticed that the performance hits seem to come in waves: > things seem to work acceptably (but slow) for a while, then suddenly, its > as if all disk IO on all volumes (including non-gluster local OS disk > volumes for the hosts) pause for about 30 seconds, then IO resumes again. > During those times, I start getting VM not responding and host not > responding notices as well as the applications having major issues. > > I've shut down most of my VMs and am down to just my essential core > VMs (shedded about 75% of my VMs). I still am experiencing the same issues. > > Am I correct in believing that once the failed disk was brought > offline that performance should return to normal? > > On Tue, May 29, 2018 at 1:27 PM, Alex K <rightkicktech@gmail.com> > wrote: > >> I would check disks status and accessibility of mount points where >> your gluster volumes reside. 
>>
>> On Tue, May 29, 2018, 22:28 Jim Kusznir <jim@palousetech.com> wrote:
>>
>>> On one ovirt server, I'm now seeing these messages:
>>>
>>> [56474.239725] blk_update_request: 63 callbacks suppressed
>>> [56474.239732] blk_update_request: I/O error, dev dm-2, sector 0
>>> [56474.240602] blk_update_request: I/O error, dev dm-2, sector 3905945472
>>> [56474.241346] blk_update_request: I/O error, dev dm-2, sector 3905945584
>>> [56474.242236] blk_update_request: I/O error, dev dm-2, sector 2048
>>> [56474.243072] blk_update_request: I/O error, dev dm-2, sector 3905943424
>>> [56474.243997] blk_update_request: I/O error, dev dm-2, sector 3905943536
>>> [56474.247347] blk_update_request: I/O error, dev dm-2, sector 0
>>> [56474.248315] blk_update_request: I/O error, dev dm-2, sector 3905945472
>>> [56474.249231] blk_update_request: I/O error, dev dm-2, sector 3905945584
>>> [56474.250221] blk_update_request: I/O error, dev dm-2, sector 2048
>>>
>>> On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir <jim@palousetech.com> wrote:
>>>
>>>> I see in messages on ovirt3 (my 3rd machine, the one upgraded to 4.2):
>>>>
>>>> May 29 11:54:41 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
>>>> May 29 11:54:51 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
>>>> May 29 11:55:01 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
>>>> (appears a lot)
>>>>
>>>> I also found, in the ssh session on that host, some sysv warnings about
>>>> the backing disk for one of the gluster volumes (straight replica 3). The
>>>> glusterfs process for that disk on that machine went offline. It's my
>>>> understanding that it should continue to work with the other two machines
>>>> while I attempt to replace that disk, right? Attempted writes (touching an
>>>> empty file) can take 15 seconds, though repeating them later is much faster.
>>>>
>>>> Gluster generates a bunch of different log files; I don't know which
>>>> ones you want, or from which machine(s).
>>>>
>>>> How do I do "volume profiling"?
>>>>
>>>> Thanks!
>>>>
>>>> On Tue, May 29, 2018 at 11:53 AM, Sahina Bose <sabose@redhat.com> wrote:
>>>>
>>>>> Do you see errors reported in the mount logs for the volume? If so,
>>>>> could you attach the logs? Any issues with your underlying disks? Can
>>>>> you also attach the output of volume profiling?
>>>>>
>>>>> On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir <jim@palousetech.com> wrote:
>>>>>
>>>>>> Ok, things have gotten MUCH worse this morning. I'm getting random
>>>>>> errors from VMs; right now, about a third of my VMs have been paused
>>>>>> due to storage issues, and most of the remaining VMs are not performing
>>>>>> well.
>>>>>>
>>>>>> At this point, I am in full EMERGENCY mode, as my production
>>>>>> services are now impacted, and I'm getting calls coming in with problems...
>>>>>>
>>>>>> I'd greatly appreciate help... VMs are running VERY slowly (when
>>>>>> they run), and they are steadily getting worse. I don't know why. I was
>>>>>> seeing CPU peaks (to 100%) on several VMs, in perfect sync, for a few
>>>>>> minutes at a time (while the VM became unresponsive, and any Linux VMs I
>>>>>> was logged into were giving me the CPU-stuck messages in my original
>>>>>> post). Is all this storage related?
>>>>>>
>>>>>> I also have two different gluster volumes for VM storage, and only
>>>>>> one had the issues, but now VMs in both are being affected at the same
>>>>>> time and in the same way.
>>>>>>
>>>>>> --Jim
>>>>>>
>>>>>> On Mon, May 28, 2018 at 10:50 PM, Sahina Bose <sabose@redhat.com> wrote:
>>>>>>
>>>>>>> [Adding gluster-users to look at the heal issue]
>>>>>>>
>>>>>>> On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir <jim@palousetech.com> wrote:
>>>>>>>
>>>>>>>> Hello:
>>>>>>>>
>>>>>>>> I've been having some cluster and gluster performance issues
>>>>>>>> lately. I also found that my cluster was out of date and was trying to
>>>>>>>> apply updates (hoping to fix some of these), and discovered the ovirt 4.1
>>>>>>>> repos were taken completely offline. So, I was forced to begin an upgrade
>>>>>>>> to 4.2. According to the docs I found/read, I needed only to add the new
>>>>>>>> repo, do a yum update, reboot, and be good on my hosts (did the yum
>>>>>>>> update, then engine-setup on my hosted engine). Things seemed to work
>>>>>>>> relatively well, except for a gluster sync issue that showed up.
>>>>>>>>
>>>>>>>> My cluster is a 3-node hyperconverged cluster. I upgraded the
>>>>>>>> hosted engine first, then engine 3. When engine 3 came back up, for
>>>>>>>> some reason one of my gluster volumes would not sync. Here's sample output:
>>>>>>>>
>>>>>>>> [root@ovirt3 ~]# gluster volume heal data-hdd info
>>>>>>>> Brick 172.172.1.11:/gluster/brick3/data-hdd
>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
>>>>>>>> Status: Connected
>>>>>>>> Number of entries: 8
>>>>>>>>
>>>>>>>> Brick 172.172.1.12:/gluster/brick3/data-hdd
>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
>>>>>>>> Status: Connected
>>>>>>>> Number of entries: 8
>>>>>>>>
>>>>>>>> Brick 172.172.1.13:/gluster/brick3/data-hdd
>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
>>>>>>>> Status: Connected
>>>>>>>> Number of entries: 8
>>>>>>>>
>>>>>>>> ---------
>>>>>>>> It's been in this state for a couple days now, and bandwidth
>>>>>>>> monitoring shows no appreciable data moving. I've tried repeatedly
>>>>>>>> commanding a full heal from all three nodes in the cluster. It's always
>>>>>>>> the same files that need healing.
>>>>>>>>
>>>>>>>> When running gluster volume heal data-hdd statistics, I sometimes
>>>>>>>> see different information, but always some number of "heal failed"
>>>>>>>> entries. It shows 0 for split brain.
>>>>>>>>
>>>>>>>> I'm not quite sure what to do. I suspect it may be due to nodes 1
>>>>>>>> and 2 still being on the older ovirt/gluster release, but I'm afraid to
>>>>>>>> upgrade and reboot them until I have a good gluster sync (don't need to
>>>>>>>> create a split-brain issue). How do I proceed with this?
>>>>>>>>
>>>>>>>> Second issue: I've been experiencing VERY POOR performance on most
>>>>>>>> of my VMs. To the tune that logging into a Windows 10 VM via remote
>>>>>>>> desktop can take 5 minutes, and launching QuickBooks inside said VM can
>>>>>>>> easily take 10 minutes. On some Linux VMs, I get random messages like this:
>>>>>>>>
>>>>>>>> Message from syslogd@unifi at May 28 20:39:23 ...
>>>>>>>> kernel:[6171996.308904] NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [mongod:14766]
>>>>>>>>
>>>>>>>> (the process and PID are often different)
>>>>>>>>
>>>>>>>> I'm not quite sure what to do about this either. My initial thought
>>>>>>>> was to upgrade everything to current and see if it's still there, but I
>>>>>>>> cannot move forward with that until my gluster is healed...
>>>>>>>>
>>>>>>>> Thanks!
>>>>>>>> --Jim
>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> Users mailing list -- users@ovirt.org
>>>>>>>> To unsubscribe send an email to users-leave@ovirt.org
>>>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>>>>>>>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3LEV6ZQ3JV2XLAL7NYBTXOYMYUOTIRQF/
>>>
>>> _______________________________________________
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-leave@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/ACO7RFSLBSRBAIONIC2HQ6Z24ZDES5MF/
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3DEQQLJM3WHQNZJ7KEMRZVFZ52MTIL74/
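[Editor's note: a minimal sketch of re-triggering and monitoring the stuck heal described above. The volume name comes from the thread; the subcommands are standard gluster CLI, and the block is guarded so it is a no-op on machines without gluster installed.]

```shell
VOL=data-hdd
if command -v gluster >/dev/null 2>&1; then
    gluster volume heal "$VOL" full               # queue a full self-heal crawl
    gluster volume heal "$VOL" info               # entries still pending, per brick
    gluster volume heal "$VOL" statistics         # healed / failed counts per crawl
    gluster volume heal "$VOL" info split-brain   # confirm split-brain count is 0
fi
```

If `heal info` keeps listing the same shards and `statistics` keeps reporting heal failures, the self-heal daemon logs (glustershd.log on each node) are usually the next place to look.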

On Thu, May 31, 2018 at 3:16 AM, Jim Kusznir <jim@palousetech.com> wrote:
I've been back at it, and still am unable to get more than one of my physical nodes to come online in ovirt, nor am I able to get more than two of the gluster volumes (storage domains) to show online within ovirt.
In Storage -> Volumes, they all show online (many with one brick down, which is correct: I have one server off). However, in Storage -> Domains, they all show down (although somehow I got machines from the data domain to start), and there's no obvious way I've seen to bring them online. It claims none of the servers can see that volume, but I've quadruple-checked that the volumes are mounted on the engines and are fully functional there. I have some more VMs I need to get back up and running. How do I fix this?
ovirt1, for unknown reasons, will not work. Attempts to bring it online fail, and I haven't yet figured out which log file to look in for more details.
As Krutika mentioned before, the storage performance issues seem to stem from the slow brick on ovirt1 - ovirt1.nwfiber.com:/gluster/brick1/engine. Have you checked this? When you say ovirt1 will not work - do you mean the host cannot be activated in oVirt engine? Can you look at/share the engine.log (on the HE VM) to understand why it won't activate? Also, if the storage domain is not accessible by any of the servers, check/share the vdsm.log to understand the error.
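[Editor's note: a quick sketch for pulling recent errors from the two logs Sahina names above. The paths are the stock oVirt locations (an assumption; adjust for your install) - engine.log lives on the hosted-engine VM, vdsm.log on each host - and the loop simply skips files that don't exist on the machine it runs from.]

```shell
# Scan the tail of engine.log (HE VM) and vdsm.log (host) for error lines.
for f in /var/log/ovirt-engine/engine.log /var/log/vdsm/vdsm.log; do
    if [ -r "$f" ]; then
        echo "== $f =="
        tail -n 500 "$f" | grep -iE 'error|warn|cannot|fail' | tail -n 40
    fi
done
```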
On Wed, May 30, 2018 at 9:36 AM, Jim Kusznir <jim@palousetech.com> wrote:
Hi all again:
I'm now subscribed to gluster-users as well, so I should get any replies from that side too.
At this point, I am seeing acceptable (although slower than I expect) performance much of the time, with periodic massive spikes in latency (occasionally so bad as to cause ovirt to detect a bad engine health status). Often, if I check the logs just then, I'll see those call traces in xfs_log_worker or other gluster processes, as well as hung-task timeout messages.
As to the profile suggesting ovirt1 had poorer performance than ovirt2, I don't have an explanation. gluster volume info engine is identical on both hosts. The computers and drives are identical (Dell R610 with PERC 6/i controller configured to pass through the drive). ovirt1's and ovirt2's partition schemes/maps do vary somewhat, but I figured that wouldn't be a big deal and gluster just uses the least common denominator. Is that accurate?
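[Editor's note: a hypothetical side-by-side comparison sketch for the two bricks discussed above. The hostnames and brick path come from the thread; the ssh lines are commented out so the block is safe to run from anywhere - uncomment them on a machine that can reach both hosts. Differences in XFS geometry (sunit/swidth), the IO scheduler, or SMART health would all be candidate explanations for one brick being slower.]

```shell
BRICK=/gluster/brick1/engine
for h in ovirt1.nwfiber.com ovirt2.nwfiber.com; do
    echo "== $h =="
    # ssh "$h" "xfs_info $BRICK"                           # XFS geometry: agcount, sunit/swidth
    # ssh "$h" "lsblk -o NAME,SIZE,ROTA,SCHED,MOUNTPOINT"  # disk layout and IO scheduler
    # ssh "$h" "smartctl -H /dev/sda"                      # quick disk health check (adjust device)
done
```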
In case it was missed, here are the dmesg errors being thrown by gluster (as far as I can tell). They definitely started after I upgraded from gluster 3.8 to 3.12:
[ 6604.536156] perf: interrupt took too long (9593 > 9565), lowering kernel.perf_event_max_sample_rate to 20000
[ 8280.188734] INFO: task xfsaild/dm-10:1061 blocked for more than 120 seconds.
[ 8280.188787] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 8280.188837] xfsaild/dm-10   D ffff93203a2eeeb0     0  1061      2 0x00000000
[ 8280.188843] Call Trace:
[ 8280.188857]  [<ffffffff960a3a2e>] ? try_to_del_timer_sync+0x5e/0x90
[ 8280.188864]  [<ffffffff96713f79>] schedule+0x29/0x70
[ 8280.188932]  [<ffffffffc049fe36>] _xfs_log_force+0x1c6/0x2c0 [xfs]
[ 8280.188939]  [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20
[ 8280.188972]  [<ffffffffc04abfec>] ? xfsaild+0x16c/0x6f0 [xfs]
[ 8280.189003]  [<ffffffffc049ff5c>] xfs_log_force+0x2c/0x70 [xfs]
[ 8280.189035]  [<ffffffffc04abe80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[ 8280.189067]  [<ffffffffc04abfec>] xfsaild+0x16c/0x6f0 [xfs]
[ 8280.189100]  [<ffffffffc04abe80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[ 8280.189105]  [<ffffffff960bb161>] kthread+0xd1/0xe0
[ 8280.189109]  [<ffffffff960bb090>] ? insert_kthread_work+0x40/0x40
[ 8280.189117]  [<ffffffff96720677>] ret_from_fork_nospec_begin+0x21/0x21
[ 8280.189121]  [<ffffffff960bb090>] ? insert_kthread_work+0x40/0x40
[ 8280.189161] INFO: task glusterclogfsyn:6421 blocked for more than 120 seconds.
[ 8280.189207] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 8280.189256] glusterclogfsyn D ffff93203a2edee0     0  6421      1 0x00000080
[ 8280.189260] Call Trace:
[ 8280.189265]  [<ffffffff96713f79>] schedule+0x29/0x70
[ 8280.189300]  [<ffffffffc04a0388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[ 8280.189305]  [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20
[ 8280.189333]  [<ffffffffc0480b97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[ 8280.189340]  [<ffffffff9624f0e7>] do_fsync+0x67/0xb0
[ 8280.189345]  [<ffffffff9672076f>] ? system_call_after_swapgs+0xbc/0x160
[ 8280.189348]  [<ffffffff9624f3d0>] SyS_fsync+0x10/0x20
[ 8280.189352]  [<ffffffff9672082f>] system_call_fastpath+0x1c/0x21
[ 8280.189356]  [<ffffffff9672077b>] ? system_call_after_swapgs+0xc8/0x160
[ 8280.189360] INFO: task glusteriotwr2:766 blocked for more than 120 seconds.
[ 8280.189404] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 8280.189453] glusteriotwr2   D ffff931cfa510000     0   766      1 0x00000080
[ 8280.189457] Call Trace:
[ 8280.189476]  [<ffffffffc01a3839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod]
[ 8280.189480]  [<ffffffff96713f79>] schedule+0x29/0x70
[ 8280.189484]  [<ffffffff967118e9>] schedule_timeout+0x239/0x2c0
[ 8280.189494]  [<ffffffffc01a3d98>] ? dm_make_request+0x128/0x1a0 [dm_mod]
[ 8280.189498]  [<ffffffff9671348d>] io_schedule_timeout+0xad/0x130
[ 8280.189502]  [<ffffffff967145ad>] wait_for_completion_io+0xfd/0x140
[ 8280.189507]  [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20
[ 8280.189513]  [<ffffffff9631e574>] blkdev_issue_flush+0xb4/0x110
[ 8280.189546]  [<ffffffffc04984b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs]
[ 8280.189588]  [<ffffffffc0480c40>] xfs_file_fsync+0x1b0/0x1e0 [xfs]
[ 8280.189593]  [<ffffffff9624f0e7>] do_fsync+0x67/0xb0
[ 8280.189597]  [<ffffffff9672076f>] ? system_call_after_swapgs+0xbc/0x160
[ 8280.189600]  [<ffffffff9624f3d0>] SyS_fsync+0x10/0x20
[ 8280.189604]  [<ffffffff9672082f>] system_call_fastpath+0x1c/0x21
[ 8280.189608]  [<ffffffff9672077b>] ? system_call_after_swapgs+0xc8/0x160
[ 8400.186024] INFO: task xfsaild/dm-10:1061 blocked for more than 120 seconds.
[ 8400.186077] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 8400.186127] xfsaild/dm-10   D ffff93203a2eeeb0     0  1061      2 0x00000000
[ 8400.186133] Call Trace:
[ 8400.186146]  [<ffffffff960a3a2e>] ? try_to_del_timer_sync+0x5e/0x90
[ 8400.186153]  [<ffffffff96713f79>] schedule+0x29/0x70
[ 8400.186221]  [<ffffffffc049fe36>] _xfs_log_force+0x1c6/0x2c0 [xfs]
[ 8400.186227]  [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20
[ 8400.186260]  [<ffffffffc04abfec>] ? xfsaild+0x16c/0x6f0 [xfs]
[ 8400.186292]  [<ffffffffc049ff5c>] xfs_log_force+0x2c/0x70 [xfs]
[ 8400.186324]  [<ffffffffc04abe80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[ 8400.186356]  [<ffffffffc04abfec>] xfsaild+0x16c/0x6f0 [xfs]
[ 8400.186388]  [<ffffffffc04abe80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[ 8400.186394]  [<ffffffff960bb161>] kthread+0xd1/0xe0
[ 8400.186398]  [<ffffffff960bb090>] ? insert_kthread_work+0x40/0x40
[ 8400.186405]  [<ffffffff96720677>] ret_from_fork_nospec_begin+0x21/0x21
[ 8400.186409]  [<ffffffff960bb090>] ? insert_kthread_work+0x40/0x40
[ 8400.186450] INFO: task glusterclogfsyn:6421 blocked for more than 120 seconds.
[ 8400.186496] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 8400.186545] glusterclogfsyn D ffff93203a2edee0     0  6421      1 0x00000080
[ 8400.186549] Call Trace:
[ 8400.186554]  [<ffffffff96713f79>] schedule+0x29/0x70
[ 8400.186589]  [<ffffffffc04a0388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[ 8400.186593]  [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20
[ 8400.186622]  [<ffffffffc0480b97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[ 8400.186629]  [<ffffffff9624f0e7>] do_fsync+0x67/0xb0
[ 8400.186634]  [<ffffffff9672076f>] ? system_call_after_swapgs+0xbc/0x160
[ 8400.186637]  [<ffffffff9624f3d0>] SyS_fsync+0x10/0x20
[ 8400.186641]  [<ffffffff9672082f>] system_call_fastpath+0x1c/0x21
[ 8400.186645]  [<ffffffff9672077b>] ? system_call_after_swapgs+0xc8/0x160
[ 8400.186649] INFO: task glusteriotwr2:766 blocked for more than 120 seconds.
[ 8400.186693] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 8400.186741] glusteriotwr2   D ffff931cfa510000     0   766      1 0x00000080
[ 8400.186746] Call Trace:
[ 8400.186764]  [<ffffffffc01a3839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod]
[ 8400.186768]  [<ffffffff96713f79>] schedule+0x29/0x70
[ 8400.186772]  [<ffffffff967118e9>] schedule_timeout+0x239/0x2c0
[ 8400.186782]  [<ffffffffc01a3d98>] ? dm_make_request+0x128/0x1a0 [dm_mod]
[ 8400.186786]  [<ffffffff9671348d>] io_schedule_timeout+0xad/0x130
[ 8400.186790]  [<ffffffff967145ad>] wait_for_completion_io+0xfd/0x140
[ 8400.186795]  [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20
[ 8400.186800]  [<ffffffff9631e574>] blkdev_issue_flush+0xb4/0x110
[ 8400.186833]  [<ffffffffc04984b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs]
[ 8400.186862]  [<ffffffffc0480c40>] xfs_file_fsync+0x1b0/0x1e0 [xfs]
[ 8400.186879]  [<ffffffff9624f0e7>] do_fsync+0x67/0xb0
[ 8400.186884]  [<ffffffff9672076f>] ? system_call_after_swapgs+0xbc/0x160
[ 8400.186887]  [<ffffffff9624f3d0>] SyS_fsync+0x10/0x20
[ 8400.186891]  [<ffffffff9672082f>] system_call_fastpath+0x1c/0x21
[ 8400.186895]  [<ffffffff9672077b>] ? system_call_after_swapgs+0xc8/0x160
[ 8400.186912] INFO: task kworker/3:4:3191 blocked for more than 120 seconds.
[ 8400.186957] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 8400.187006] kworker/3:4     D ffff931faf598fd0     0  3191      2 0x00000080
[ 8400.187041] Workqueue: xfs-sync/dm-10 xfs_log_worker [xfs]
[ 8400.187043] Call Trace:
[ 8400.187050]  [<ffffffff9614e45f>] ? delayacct_end+0x8f/0xb0
[ 8400.187055]  [<ffffffff960dbbc8>] ? update_group_power+0x28/0x280
[ 8400.187059]  [<ffffffff96713f79>] schedule+0x29/0x70
[ 8400.187062]  [<ffffffff967118e9>] schedule_timeout+0x239/0x2c0
[ 8400.187066]  [<ffffffff9671432d>] wait_for_completion+0xfd/0x140
[ 8400.187071]  [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20
[ 8400.187075]  [<ffffffff960b418d>] flush_work+0xfd/0x190
[ 8400.187079]  [<ffffffff960b06e0>] ? move_linked_works+0x90/0x90
[ 8400.187110]  [<ffffffffc04a1d8a>] xlog_cil_force_lsn+0x8a/0x210 [xfs]
[ 8400.187142]  [<ffffffffc049fcf5>] _xfs_log_force+0x85/0x2c0 [xfs]
[ 8400.187174]  [<ffffffffc049ffd6>] ? xfs_log_worker+0x36/0x100 [xfs]
[ 8400.187206]  [<ffffffffc049ff5c>] xfs_log_force+0x2c/0x70 [xfs]
[ 8400.187237]  [<ffffffffc049ffd6>] xfs_log_worker+0x36/0x100 [xfs]
[ 8400.187241]  [<ffffffff960b312f>] process_one_work+0x17f/0x440
[ 8400.187245]  [<ffffffff960b3df6>] worker_thread+0x126/0x3c0
[ 8400.187249]  [<ffffffff960b3cd0>] ? manage_workers.isra.24+0x2a0/0x2a0
[ 8400.187253]  [<ffffffff960bb161>] kthread+0xd1/0xe0
[ 8400.187257]  [<ffffffff960bb090>] ? insert_kthread_work+0x40/0x40
[ 8400.187261]  [<ffffffff96720677>] ret_from_fork_nospec_begin+0x21/0x21
[ 8400.187265]  [<ffffffff960bb090>] ? insert_kthread_work+0x40/0x40
[13493.643624] perf: interrupt took too long (12026 > 11991), lowering kernel.perf_event_max_sample_rate to 16000
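[Editor's note: the hung-task warnings above fire when a task sits in uninterruptible (D) sleep past the 120-second threshold, which on this cluster points at the brick's block device stalling under fsync. A small read-only sketch for checking the relevant knobs and counting D-state tasks while a stall is in progress; the `|| true` guards are an assumption to keep it harmless where a sysctl key is absent.]

```shell
# Current hung-task threshold and writeback pressure settings:
sysctl kernel.hung_task_timeout_secs 2>/dev/null || true
sysctl vm.dirty_ratio vm.dirty_background_ratio 2>/dev/null || true
# Count tasks currently stuck in uninterruptible sleep (D state):
DCOUNT=$(ps -eo stat= 2>/dev/null | grep -c '^D' || true)
echo "tasks in D state: $DCOUNT"
```

A steadily non-zero D-state count during the 30-second pauses would support the theory that the stalls are block-device waits rather than gluster CPU load.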
On Wed, May 30, 2018 at 2:44 AM, Krutika Dhananjay <kdhananj@redhat.com> wrote:
The profile seems to suggest very high latencies on the brick at ovirt1.nwfiber.com:/gluster/brick1/engine; ovirt2.* shows decent numbers. Is everything OK with the brick on ovirt1? Are the bricks of the engine volume on both these servers identical in terms of their config?
-Krutika
On Wed, May 30, 2018 at 3:07 PM, Jim Kusznir <jim@palousetech.com> wrote:
Hi:
Thank you. I was finally able to get my cluster minimally functional again (it's 2am local here; I had to be up by 6am). I set cluster.eager-lock off, but I'm still seeing performance issues. Here's the engine volume profile:
Brick: ovirt1.nwfiber.com:/gluster/brick1/engine
------------------------------------------------
Cumulative Stats:
   Block Size:       256b+      512b+     4096b+
 No. of Reads:         702         24        772
No. of Writes:           0        939       4802

   Block Size:      8192b+    16384b+    32768b+
 No. of Reads:           0         51         49
No. of Writes:        4211        858        291

   Block Size:     65536b+   131072b+
 No. of Reads:          49       5232
No. of Writes:         333        588

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us              3      FORGET
      0.00       0.00 us       0.00 us       0.00 us           1227     RELEASE
      0.00       0.00 us       0.00 us       0.00 us           1035  RELEASEDIR
      0.00     827.00 us     619.00 us    1199.00 us             10    READDIRP
      0.00      98.38 us      30.00 us     535.00 us            144     ENTRYLK
      0.00     180.11 us     121.00 us     257.00 us             94 REMOVEXATTR
      0.00     208.03 us      23.00 us    1980.00 us            182    GETXATTR
      0.00    1212.71 us      35.00 us   29459.00 us             38     READDIR
      0.01     260.39 us     172.00 us     552.00 us            376     XATTROP
      0.01     305.10 us      45.00 us  144983.00 us            727      STATFS
      0.03    1267.23 us      22.00 us  418399.00 us            451       FLUSH
      0.04    1164.14 us      86.00 us  437638.00 us            600        OPEN
      0.07    2253.75 us       3.00 us  645037.00 us            598     OPENDIR
      0.08   17360.97 us     149.00 us  962156.00 us             94     SETATTR
      0.12    2733.36 us      50.00 us 1683945.00 us            877       FSTAT
      0.21    3041.55 us      29.00 us 1021732.00 us           1368     INODELK
      0.24     389.93 us     107.00 us  550203.00 us          11840    FXATTROP
      0.34   14820.49 us      38.00 us 1935527.00 us            444        STAT
      0.88    3765.77 us      42.00 us 1341978.00 us           4581      LOOKUP
     15.06   19624.04 us      26.00 us 6412511.00 us          14971    FINODELK
     21.63   59817.26 us      45.00 us 4008832.00 us           7054       FSYNC
     23.19  263941.14 us      86.00 us 4130892.00 us           1714        READ
     38.10  125578.64 us     207.00 us 7682338.00 us           5919       WRITE

    Duration: 6957 seconds
   Data Read: 694832857 bytes
Data Written: 192751104 bytes

Interval 0 Stats:
   Block Size:       256b+      512b+     4096b+
 No. of Reads:         702         24        772
No. of Writes:           0        939       4802

   Block Size:      8192b+    16384b+    32768b+
 No. of Reads:           0         51         49
No. of Writes:        4211        858        291

   Block Size:     65536b+   131072b+
 No. of Reads:          49       5232
No. of Writes:         333        588

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us              3      FORGET
      0.00       0.00 us       0.00 us       0.00 us           1227     RELEASE
      0.00       0.00 us       0.00 us       0.00 us           1035  RELEASEDIR
      0.00     827.00 us     619.00 us    1199.00 us             10    READDIRP
      0.00      98.38 us      30.00 us     535.00 us            144     ENTRYLK
      0.00     180.11 us     121.00 us     257.00 us             94 REMOVEXATTR
      0.00     208.03 us      23.00 us    1980.00 us            182    GETXATTR
      0.00    1212.71 us      35.00 us   29459.00 us             38     READDIR
      0.01     260.39 us     172.00 us     552.00 us            376     XATTROP
      0.01     305.10 us      45.00 us  144983.00 us            727      STATFS
      0.03    1267.23 us      22.00 us  418399.00 us            451       FLUSH
      0.04    1164.14 us      86.00 us  437638.00 us            600        OPEN
      0.07    2253.75 us       3.00 us  645037.00 us            598     OPENDIR
      0.08   17360.97 us     149.00 us  962156.00 us             94     SETATTR
      0.12    2733.36 us      50.00 us 1683945.00 us            877       FSTAT
      0.21    3041.55 us      29.00 us 1021732.00 us           1368     INODELK
      0.24     389.93 us     107.00 us  550203.00 us          11840    FXATTROP
      0.34   14820.49 us      38.00 us 1935527.00 us            444        STAT
      0.91    3859.75 us      42.00 us 1341978.00 us           4581      LOOKUP
     15.05   19622.80 us      26.00 us 6412511.00 us          14971    FINODELK
     21.62   59809.04 us      45.00 us 4008832.00 us           7054       FSYNC
     23.18  263941.14 us      86.00 us 4130892.00 us           1714        READ
     38.09  125562.74 us     207.00 us 7682338.00 us           5919       WRITE

    Duration: 6957 seconds
   Data Read: 694832857 bytes
Data Written: 192751104 bytes

Brick: ovirt2.nwfiber.com:/gluster/brick1/engine
------------------------------------------------
Cumulative Stats:
   Block Size:       256b+      512b+     1024b+
 No. of Reads:         704        167        216
No. of Writes:           0        939          0

   Block Size:      2048b+     4096b+     8192b+
 No. of Reads:         424      34918       9866
No. of Writes:           0       4802       4211

   Block Size:     16384b+    32768b+    65536b+
 No. of Reads:       12695       3290       3880
No. of Writes:         858        291        333

   Block Size:    131072b+
 No. of Reads:       13718
No. of Writes:         596

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us              3      FORGET
      0.00       0.00 us       0.00 us       0.00 us           1228     RELEASE
      0.00       0.00 us       0.00 us       0.00 us           1041  RELEASEDIR
      0.06      74.26 us      21.00 us     915.00 us            144     ENTRYLK
      0.07     141.97 us      87.00 us     268.00 us             94 REMOVEXATTR
      0.08     370.68 us      22.00 us    1311.00 us             38     READDIR
      0.09     184.88 us     108.00 us     316.00 us             94     SETATTR
      0.14      56.41 us      21.00 us     540.00 us            451       FLUSH
      0.14     139.42 us      18.00 us    3251.00 us            184    GETXATTR
      0.31      96.06 us       2.00 us    2293.00 us            598     OPENDIR
      0.35      88.76 us      31.00 us     716.00 us            727      STATFS
      0.43     131.96 us      59.00 us     784.00 us            600        OPEN
      0.44     214.77 us     110.00 us     851.00 us            376     XATTROP
      0.44      81.02 us      21.00 us    3029.00 us            998        STAT
      0.51      68.57 us      17.00 us    3039.00 us           1368     INODELK
      1.26     429.06 us     274.00 us     988.00 us            539    READDIRP
      1.89     109.48 us      38.00 us    2492.00 us           3175       FSTAT
      5.09      62.48 us      19.00 us    4578.00 us          14967    FINODELK
      5.72     229.43 us      32.00 us     979.00 us           4581      LOOKUP
     11.48     356.40 us     144.00 us   16222.00 us           5917       WRITE
     12.40     192.55 us      87.00 us    4372.00 us          11834    FXATTROP
     12.48     325.11 us      33.00 us   10702.00 us           7051       FSYNC
     46.61    2063.11 us     131.00 us  193833.00 us           4151        READ

    Duration: 6958 seconds
   Data Read: 2782908225 bytes
Data Written: 193799680 bytes

Interval 0 Stats:
   Block Size:       256b+      512b+     1024b+
 No. of Reads:         704        167        216
No. of Writes:           0        939          0

   Block Size:      2048b+     4096b+     8192b+
 No. of Reads:         424      34918       9866
No. of Writes:           0       4802       4211

   Block Size:     16384b+    32768b+    65536b+
 No. of Reads:       12695       3290       3880
No. of Writes:         858        291        333

   Block Size:    131072b+
 No. of Reads:       13718
No. of Writes:         596

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us              3      FORGET
      0.00       0.00 us       0.00 us       0.00 us           1228     RELEASE
      0.00       0.00 us       0.00 us       0.00 us           1041  RELEASEDIR
      0.06      74.26 us      21.00 us     915.00 us            144     ENTRYLK
      0.07     141.97 us      87.00 us     268.00 us             94 REMOVEXATTR
      0.08     370.68 us      22.00 us    1311.00 us             38     READDIR
      0.09     184.88 us     108.00 us     316.00 us             94     SETATTR
      0.14      56.41 us      21.00 us     540.00 us            451       FLUSH
      0.14     139.42 us      18.00 us    3251.00 us            184    GETXATTR
      0.31      96.06 us       2.00 us    2293.00 us            598     OPENDIR
      0.35      88.76 us      31.00 us     716.00 us            727      STATFS
      0.43     131.96 us      59.00 us     784.00 us            600        OPEN
      0.44     214.77 us     110.00 us     851.00 us            376     XATTROP
      0.44      81.02 us      21.00 us    3029.00 us            998        STAT
      0.51      68.57 us      17.00 us    3039.00 us           1368     INODELK
      1.26     429.06 us     274.00 us     988.00 us            539    READDIRP
      1.89     109.51 us      38.00 us    2492.00 us           3175       FSTAT
      5.09      62.48 us      19.00 us    4578.00 us          14967    FINODELK
      5.72     229.43 us      32.00 us     979.00 us           4581      LOOKUP
     11.48     356.40 us     144.00 us   16222.00 us           5917       WRITE
     12.40     192.55 us      87.00 us    4372.00 us          11834    FXATTROP
     12.48     325.11 us      33.00 us   10702.00 us           7051       FSYNC
     46.62    2063.91 us     131.00 us  193833.00 us           4151        READ

    Duration: 6958 seconds
   Data Read: 2782908225 bytes
Data Written: 193799680 bytes
gluster> volume info engine
Volume Name: engine
Type: Replicate
Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
performance.strict-o-direct: on
nfs.disable: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 6
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: off
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
On Tue, May 29, 2018 at 9:17 PM, Krutika Dhananjay <kdhananj@redhat.com> wrote:
Adding Ravi to look into the heal issue.
As for the fsync hang and subsequent IO errors, it looks a lot like https://bugzilla.redhat.com/show_bug.cgi?id=1497156, and Paolo Bonzini from qemu had pointed out that this would be fixed by the following commit:
commit e72c9a2a67a6400c8ef3d01d4c461dbbbfa0e1f0
Author: Paolo Bonzini <pbonzini@redhat.com>
Date:   Wed Jun 21 16:35:46 2017 +0200
scsi: virtio_scsi: let host do exception handling
virtio_scsi tries to do exception handling after the default 30 seconds timeout expires. However, it's better to let the host control the timeout, otherwise with a heavy I/O load it is likely that an abort will also timeout. This leads to fatal errors like filesystems going offline.
Disable the 'sd' timeout and allow the host to do exception handling, following the precedent of the storvsc driver.
Hannes has a proposal to introduce timeouts in virtio, but this provides an immediate solution for stable kernels too.
[mkp: fixed typo]
Reported-by: Douglas Miller <dougmill@linux.vnet.ibm.com>
Cc: "James E.J. Bottomley" <jejb@linux.vnet.ibm.com>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Hannes Reinecke <hare@suse.de>
Cc: linux-scsi@vger.kernel.org
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
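[Editor's note: a small sketch for checking where a guest stands relative to the commit above. The quoted fix effectively disables the per-device SCSI command timer for virtio_scsi disks; on unfixed kernels that timer is exposed in sysfs and can be inspected (and, as a stopgap, raised) per disk. The `180` value below is an illustrative assumption, not a recommendation from the thread.]

```shell
# Inside an affected guest: show the current SCSI command timeout per disk.
for t in /sys/block/sd*/device/timeout; do
    [ -r "$t" ] && echo "$t: $(cat "$t") s"
done
# Stopgap on unfixed kernels (as root), until the fixed kernel is installed:
# echo 180 > /sys/block/sda/device/timeout
```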
Adding Paolo/Kevin to comment.
As for the poor gluster performance, could you disable cluster.eager-lock and see if that makes any difference:
# gluster volume set <VOL> cluster.eager-lock off
Do also capture the volume profile again if you still see performance issues after disabling eager-lock.
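[Editor's note: a sketch of one capture cycle per the request above, using the standard gluster profile subcommands. The volume name comes from the thread; the output path is an illustrative assumption, and the block is guarded so it is a no-op without gluster installed.]

```shell
VOL=engine
OUT=/tmp/${VOL}-profile-$(date +%Y%m%d%H%M%S).txt
if command -v gluster >/dev/null 2>&1; then
    gluster volume set "$VOL" cluster.eager-lock off
    gluster volume profile "$VOL" start
    # ... let the slow workload run for a representative interval ...
    gluster volume profile "$VOL" info > "$OUT"
    gluster volume profile "$VOL" stop
    echo "profile written to $OUT"
fi
```

Capturing `profile info` once before and once after the setting change makes the two fop-latency tables directly comparable.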
-Krutika
On Wed, May 30, 2018 at 6:55 AM, Jim Kusznir <jim@palousetech.com> wrote:
I also finally found the following in my system log on one server:
[10679.524491] INFO: task glusterclogro:14933 blocked for more than 120 seconds.
[10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[10679.527144] glusterclogro   D ffff97209832bf40     0 14933      1 0x00000080
[10679.527150] Call Trace:
[10679.527161]  [<ffffffffb9913f79>] schedule+0x29/0x70
[10679.527218]  [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[10679.527225]  [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[10679.527254]  [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[10679.527260]  [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[10679.527268]  [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[10679.527271]  [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20
[10679.527275]  [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[10679.527279]  [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds.
[10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[10679.529956] glusterposixfsy D ffff972495f84f10     0 14941      1 0x00000080
[10679.529961] Call Trace:
[10679.529966]  [<ffffffffb9913f79>] schedule+0x29/0x70
[10679.530003]  [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[10679.530008]  [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[10679.530038]  [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[10679.530042]  [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[10679.530046]  [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[10679.530050]  [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20
[10679.530054]  [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[10679.530058]  [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120 seconds.
[10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[10679.533732] glusteriotwr13  D ffff9720a83f0000     0 15486      1 0x00000080
[10679.533738] Call Trace:
[10679.533747]  [<ffffffffb9913f79>] schedule+0x29/0x70
[10679.533799]  [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[10679.533806]  [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[10679.533846]  [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[10679.533852]  [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[10679.533858]  [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[10679.533863]  [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20
[10679.533868]  [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[10679.533873]  [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[10919.512757] INFO: task glusterclogro:14933 blocked for more than 120 seconds.
[10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[10919.516663] glusterclogro   D ffff97209832bf40     0 14933      1 0x00000080
[10919.516677] Call Trace:
[10919.516690]  [<ffffffffb9913f79>] schedule+0x29/0x70
[10919.516696]  [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
[10919.516703]  [<ffffffffb951cc04>] ? blk_finish_plug+0x14/0x40
[10919.516768]  [<ffffffffc05e9224>] ? _xfs_buf_ioapply+0x334/0x460 [xfs]
[10919.516774]  [<ffffffffb991432d>] wait_for_completion+0xfd/0x140
[10919.516782]  [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[10919.516821]  [<ffffffffc05eb0a3>] ? _xfs_buf_read+0x23/0x40 [xfs]
[10919.516859]  [<ffffffffc05eafa9>] xfs_buf_submit_wait+0xf9/0x1d0 [xfs]
[10919.516902]  [<ffffffffc061b279>] ? xfs_trans_read_buf_map+0x199/0x400 [xfs]
[10919.516940]  [<ffffffffc05eb0a3>] _xfs_buf_read+0x23/0x40 [xfs]
[10919.516977]  [<ffffffffc05eb1b9>] xfs_buf_read_map+0xf9/0x160 [xfs]
[10919.517022]  [<ffffffffc061b279>] xfs_trans_read_buf_map+0x199/0x400 [xfs]
[10919.517057]  [<ffffffffc05c8d04>] xfs_da_read_buf+0xd4/0x100 [xfs]
[10919.517091]  [<ffffffffc05c8d53>] xfs_da3_node_read+0x23/0xd0 [xfs]
[10919.517126]  [<ffffffffc05c9fee>] xfs_da3_node_lookup_int+0x6e/0x2f0 [xfs]
[10919.517160]  [<ffffffffc05d5a1d>] xfs_dir2_node_lookup+0x4d/0x170 [xfs]
[10919.517194]  [<ffffffffc05ccf5d>] xfs_dir_lookup+0x1bd/0x1e0 [xfs]
[10919.517233]  [<ffffffffc05fd8d9>] xfs_lookup+0x69/0x140 [xfs]
[10919.517271]  [<ffffffffc05fa018>] xfs_vn_lookup+0x78/0xc0 [xfs]
[10919.517278]  [<ffffffffb9425cf3>] lookup_real+0x23/0x60
[10919.517283]  [<ffffffffb9426702>] __lookup_hash+0x42/0x60
[10919.517288]  [<ffffffffb942d519>] SYSC_renameat2+0x3a9/0x5a0
[10919.517296]  [<ffffffffb94d3753>] ? selinux_file_free_security+0x23/0x30
[10919.517304]  [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[10919.517309]  [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[10919.517313]  [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[10919.517318]  [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[10919.517323]  [<ffffffffb942e58e>] SyS_renameat2+0xe/0x10
[10919.517328]  [<ffffffffb942e5ce>] SyS_rename+0x1e/0x20
[10919.517333]  [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[10919.517339]  [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11159.496095] INFO: task glusteriotwr9:15482 blocked for more than 120 seconds.
[11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11159.498978] glusteriotwr9   D ffff971fa0fa1fa0     0 15482      1 0x00000080
[11159.498984] Call Trace:
[11159.498995]  [<ffffffffb9911f00>] ?
bit_wait+0x50/0x50 [11159.498999] [<ffffffffb9913f79>] schedule+0x29/0x70 [11159.499003] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11159.499056] [<ffffffffc05dd9b7>] ? xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs] [11159.499082] [<ffffffffc05dd43e>] ? xfs_iext_bno_to_irec+0x8e/0xd0 [xfs] [11159.499090] [<ffffffffb92f7a12>] ? ktime_get_ts64+0x52/0xf0 [11159.499093] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50 [11159.499097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20 [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50 [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0 [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90 [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40 [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70 [11159.499125] [<ffffffffb9394e85>] grab_cache_page_write_begin+0x 55/0xc0 [11159.499130] [<ffffffffb9484b76>] iomap_write_begin+0x66/0x100 [11159.499135] [<ffffffffb9484edf>] iomap_write_actor+0xcf/0x1d0 [11159.499140] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 [11159.499144] [<ffffffffb94854e7>] iomap_apply+0xb7/0x150 [11159.499149] [<ffffffffb9485621>] iomap_file_buffered_write+0xa1 /0xe0 [11159.499153] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 [11159.499182] [<ffffffffc05f025d>] xfs_file_buffered_aio_write+0x12d/0x2c0 [xfs] [11159.499213] [<ffffffffc05f057d>] xfs_file_aio_write+0x18d/0x1b0 [xfs] [11159.499217] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 [11159.499222] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 [11159.499225] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 [11159.499230] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11159.499234] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11159.499238] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than 120 seconds. 
[11279.490197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.491665] xfsaild/dm-10 D ffff9720a8660fd0 0 1134 2 0x00000000 [11279.491671] Call Trace: [11279.491682] [<ffffffffb92a3a2e>] ? try_to_del_timer_sync+0x5e/0x9 0 [11279.491688] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.491744] [<ffffffffc060de36>] _xfs_log_force+0x1c6/0x2c0 [xfs] [11279.491750] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.491783] [<ffffffffc0619fec>] ? xfsaild+0x16c/0x6f0 [xfs] [11279.491817] [<ffffffffc060df5c>] xfs_log_force+0x2c/0x70 [xfs] [11279.491849] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs] [11279.491880] [<ffffffffc0619fec>] xfsaild+0x16c/0x6f0 [xfs] [11279.491913] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs] [11279.491919] [<ffffffffb92bb161>] kthread+0xd1/0xe0 [11279.491926] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 [11279.491932] [<ffffffffb9920677>] ret_from_fork_nospec_begin+0x2 1/0x21 [11279.491936] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 [11279.491976] INFO: task glusterclogfsyn:14934 blocked for more than 120 seconds. [11279.493466] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.494952] glusterclogfsyn D ffff97209832af70 0 14934 1 0x00000080 [11279.494957] Call Trace: [11279.494979] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod] [11279.494983] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.494987] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11279.494997] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod] [11279.495001] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11279.495005] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x 140 [11279.495010] [<ffffffffb92cf1b0>] ? 
wake_up_state+0x20/0x20 [11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 [11279.495049] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs] [11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] [11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.495090] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11279.495094] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [11279.495098] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.495102] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11279.495105] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds. [11279.496606] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.498114] glusterposixfsy D ffff972495f84f10 0 14941 1 0x00000080 [11279.498118] Call Trace: [11279.498134] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod] [11279.498138] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.498142] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11279.498152] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod] [11279.498156] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11279.498160] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x 140 [11279.498165] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.498169] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 [11279.498202] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs] [11279.498231] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] [11279.498238] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.498242] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11279.498246] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [11279.498250] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.498254] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11279.498257] INFO: task glusteriotwr1:14950 blocked for more than 120 seconds. 
[11279.499789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.501343] glusteriotwr1 D ffff97208b6daf70 0 14950 1 0x00000080 [11279.501348] Call Trace: [11279.501353] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.501390] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [11279.501396] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.501428] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [11279.501432] [<ffffffffb944ef3f>] generic_write_sync+0x4f/0x70 [11279.501461] [<ffffffffc05f0545>] xfs_file_aio_write+0x155/0x1b0 [xfs] [11279.501466] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 [11279.501471] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 [11279.501475] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 [11279.501479] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11279.501483] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.501489] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11279.501493] INFO: task glusteriotwr4:14953 blocked for more than 120 seconds. [11279.503047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.504630] glusteriotwr4 D ffff972499f2bf40 0 14953 1 0x00000080 [11279.504635] Call Trace: [11279.504640] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.504676] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [11279.504681] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.504710] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [11279.504714] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.504718] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11279.504722] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [11279.504725] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.504730] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [12127.466494] perf: interrupt took too long (8263 > 8150), lowering kernel.perf_event_max_sample_rate to 24000
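With this many hung-task reports it helps to see which threads are stalling most often. A quick throwaway parser for a dmesg capture (my own sketch, assuming only the "INFO: task NAME:PID blocked for more than N seconds" format shown above):

```python
import re

# Tally "blocked for more than N seconds" hung-task reports by task name
# from a dmesg capture (sample lines copied from the log above).
HUNG_RE = re.compile(r"INFO: task (\S+?):\d+ blocked for more than \d+ seconds")

def summarize_hung_tasks(dmesg_text):
    counts = {}
    for name in HUNG_RE.findall(dmesg_text):
        counts[name] = counts.get(name, 0) + 1
    return counts

sample = """\
[10679.524491] INFO: task glusterclogro:14933 blocked for more than 120 seconds.
[10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds.
[10919.512757] INFO: task glusterclogro:14933 blocked for more than 120 seconds.
"""
print(summarize_hung_tasks(sample))
```

Running it over the full log above shows the stalls are spread across gluster I/O, fsync, and XFS threads rather than one misbehaving task, which points at the underlying block device rather than any single process.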
--------------------
I think this is the cause of the massive oVirt performance issues, irrespective of gluster volume. At the time this happened, I was also ssh'ed into the host, running some rpm query commands. I had just run rpm -qa | grep glusterfs (to verify what version was actually installed), and that command took almost 2 minutes to return! Normally it takes less than 2 seconds. That is all pure local SSD IO, too.
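For what it's worth, a crude way to spot-check whether plain local fsyncs are stalling (a sketch, not a proper benchmark; the temp-file size is an arbitrary choice, and on a healthy SSD this should come back in single-digit milliseconds, not minutes):

```python
import os
import tempfile
import time

# Rough probe of local-disk write+fsync latency, in the spirit of the
# slow "rpm -qa" observation above.
def fsync_latency_ms(directory=None, size=4096):
    fd, name = tempfile.mkstemp(dir=directory)
    try:
        os.write(fd, b"\0" * size)
        start = time.monotonic()
        os.fsync(fd)  # block until the data reaches stable storage
        return (time.monotonic() - start) * 1000.0
    finally:
        os.close(fd)
        os.unlink(name)

print("fsync latency: %.2f ms" % fsync_latency_ms())
```

Pointing `directory` at the brick mount versus the OS disk would also show whether the stalls are confined to one device.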
I'm no expert, but it's my understanding that any time software causes these kinds of issues, it's a serious bug in that software, even if it's just mishandled exceptions. Is this correct?
--Jim
On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir <jim@palousetech.com> wrote:
> I think this is the profile information for one of the volumes that > lives on the SSDs and is fully operational with no down/problem disks: > > [root@ovirt2 yum.repos.d]# gluster volume profile data info > Brick: ovirt2.nwfiber.com:/gluster/brick2/data > ---------------------------------------------- > Cumulative Stats: > Block Size: 256b+ 512b+ > 1024b+ > No. of Reads: 983 2696 > 1059 > No. of Writes: 0 1113 > 302 > > Block Size: 2048b+ 4096b+ > 8192b+ > No. of Reads: 852 88608 > 53526 > No. of Writes: 522 812340 > 76257 > > Block Size: 16384b+ 32768b+ > 65536b+ > No. of Reads: 54351 241901 > 15024 > No. of Writes: 21636 8656 > 8976 > > Block Size: 131072b+ > No. of Reads: 524156 > No. of Writes: 296071 > %-latency Avg-latency Min-Latency Max-Latency No. of calls > Fop > --------- ----------- ----------- ----------- ------------ > ---- > 0.00 0.00 us 0.00 us 0.00 us 4189 > RELEASE > 0.00 0.00 us 0.00 us 0.00 us 1257 > RELEASEDIR > 0.00 46.19 us 12.00 us 187.00 us 69 > FLUSH > 0.00 147.00 us 78.00 us 367.00 us 86 > REMOVEXATTR > 0.00 223.46 us 24.00 us 1166.00 us 149 > READDIR > 0.00 565.34 us 76.00 us 3639.00 us 88 > FTRUNCATE > 0.00 263.28 us 20.00 us 28385.00 us 228 > LK > 0.00 98.84 us 2.00 us 880.00 us 1198 > OPENDIR > 0.00 91.59 us 26.00 us 10371.00 us 3853 > STATFS > 0.00 494.14 us 17.00 us 193439.00 us 1171 > GETXATTR > 0.00 299.42 us 35.00 us 9799.00 us 2044 > READDIRP > 0.00 1965.31 us 110.00 us 382258.00 us 321 > XATTROP > 0.01 113.40 us 24.00 us 61061.00 us 8134 > STAT > 0.01 755.38 us 57.00 us 607603.00 us 3196 > DISCARD > 0.05 2690.09 us 58.00 us 2704761.00 us 3206 > OPEN > 0.10 119978.25 us 97.00 us 9406684.00 us 154 > SETATTR > 0.18 101.73 us 28.00 us 700477.00 us 313379 > FSTAT > 0.23 1059.84 us 25.00 us 2716124.00 us 38255 > LOOKUP > 0.47 1024.11 us 54.00 us 6197164.00 us 81455 > FXATTROP > 1.72 2984.00 us 15.00 us 37098954.00 us > 103020 FINODELK > 5.92 44315.32 us 51.00 us 24731536.00 us > 23957 FSYNC > 13.27 2399.78 us 25.00 us 22089540.00 
us > 991005 READ > 37.00 5980.43 us 52.00 us 22099889.00 us > 1108976 WRITE > 41.04 5452.75 us 13.00 us 22102452.00 us > 1349053 INODELK > > Duration: 10026 seconds > Data Read: 80046027759 bytes > Data Written: 44496632320 bytes > > Interval 1 Stats: > Block Size: 256b+ 512b+ > 1024b+ > No. of Reads: 983 2696 > 1059 > No. of Writes: 0 838 > 185 > > Block Size: 2048b+ 4096b+ > 8192b+ > No. of Reads: 852 85856 > 51575 > No. of Writes: 382 705802 > 57812 > > Block Size: 16384b+ 32768b+ > 65536b+ > No. of Reads: 52673 232093 > 14984 > No. of Writes: 13499 4908 > 4242 > > Block Size: 131072b+ > No. of Reads: 460040 > No. of Writes: 6411 > %-latency Avg-latency Min-Latency Max-Latency No. of calls > Fop > --------- ----------- ----------- ----------- ------------ > ---- > 0.00 0.00 us 0.00 us 0.00 us 2093 > RELEASE > 0.00 0.00 us 0.00 us 0.00 us 1093 > RELEASEDIR > 0.00 53.38 us 26.00 us 111.00 us 16 > FLUSH > 0.00 145.14 us 78.00 us 367.00 us 71 > REMOVEXATTR > 0.00 190.96 us 114.00 us 298.00 us 71 > SETATTR > 0.00 213.38 us 24.00 us 1145.00 us 90 > READDIR > 0.00 263.28 us 20.00 us 28385.00 us 228 > LK > 0.00 101.76 us 2.00 us 880.00 us 1093 > OPENDIR > 0.01 93.60 us 27.00 us 10371.00 us 3090 > STATFS > 0.02 537.47 us 17.00 us 193439.00 us 1038 > GETXATTR > 0.03 297.44 us 35.00 us 9799.00 us 1990 > READDIRP > 0.03 2357.28 us 110.00 us 382258.00 us 253 > XATTROP > 0.04 385.93 us 58.00 us 47593.00 us 2091 > OPEN > 0.04 114.86 us 24.00 us 61061.00 us 7715 > STAT > 0.06 444.59 us 57.00 us 333240.00 us 3053 > DISCARD > 0.42 316.24 us 25.00 us 290728.00 us 29823 > LOOKUP > 0.73 257.92 us 54.00 us 344812.00 us 63296 > FXATTROP > 1.37 98.30 us 28.00 us 67621.00 us 313172 > FSTAT > 1.58 2124.69 us 51.00 us 849200.00 us 16717 > FSYNC > 5.73 162.46 us 52.00 us 748492.00 us 794079 > WRITE > 7.19 2065.17 us 16.00 us 37098954.00 us > 78381 FINODELK > 36.44 886.32 us 25.00 us 2216436.00 us 925421 > READ > 46.30 1178.04 us 13.00 us 1700704.00 us 884635 > INODELK > > Duration: 7485 
seconds > Data Read: 71250527215 bytes > Data Written: 5119903744 bytes > > Brick: ovirt3.nwfiber.com:/gluster/brick2/data > ---------------------------------------------- > Cumulative Stats: > Block Size: 1b+ > No. of Reads: 0 > No. of Writes: 3264419 > %-latency Avg-latency Min-Latency Max-Latency No. of calls > Fop > --------- ----------- ----------- ----------- ------------ > ---- > 0.00 0.00 us 0.00 us 0.00 us 90 > FORGET > 0.00 0.00 us 0.00 us 0.00 us 9462 > RELEASE > 0.00 0.00 us 0.00 us 0.00 us 4254 > RELEASEDIR > 0.00 50.52 us 13.00 us 190.00 us 71 > FLUSH > 0.00 186.97 us 87.00 us 713.00 us 86 > REMOVEXATTR > 0.00 79.32 us 33.00 us 189.00 us 228 > LK > 0.00 220.98 us 129.00 us 513.00 us 86 > SETATTR > 0.01 259.30 us 26.00 us 2632.00 us 137 > READDIR > 0.02 322.76 us 145.00 us 2125.00 us 321 > XATTROP > 0.03 109.55 us 2.00 us 1258.00 us 1193 > OPENDIR > 0.05 70.21 us 21.00 us 431.00 us 3196 > DISCARD > 0.05 169.26 us 21.00 us 2315.00 us 1545 > GETXATTR > 0.12 176.85 us 63.00 us 2844.00 us 3206 > OPEN > 0.61 303.49 us 90.00 us 3085.00 us 9633 > FSTAT > 2.44 305.66 us 28.00 us 3716.00 us 38230 > LOOKUP > 4.52 266.22 us 55.00 us 53424.00 us 81455 > FXATTROP > 6.96 1397.99 us 51.00 us 64822.00 us 23889 > FSYNC > 16.48 84.74 us 25.00 us 6917.00 us 932592 > WRITE > 30.16 106.90 us 13.00 us 3920189.00 us 1353046 > INODELK > 38.55 1794.52 us 14.00 us 16210553.00 us > 103039 FINODELK > > Duration: 66562 seconds > Data Read: 0 bytes > Data Written: 3264419 bytes > > Interval 1 Stats: > Block Size: 1b+ > No. of Reads: 0 > No. of Writes: 794080 > %-latency Avg-latency Min-Latency Max-Latency No. 
of calls > Fop > --------- ----------- ----------- ----------- ------------ > ---- > 0.00 0.00 us 0.00 us 0.00 us 2093 > RELEASE > 0.00 0.00 us 0.00 us 0.00 us 1093 > RELEASEDIR > 0.00 70.31 us 26.00 us 125.00 us 16 > FLUSH > 0.00 193.10 us 103.00 us 713.00 us 71 > REMOVEXATTR > 0.01 227.32 us 133.00 us 513.00 us 71 > SETATTR > 0.01 79.32 us 33.00 us 189.00 us 228 > LK > 0.01 259.83 us 35.00 us 1138.00 us 89 > READDIR > 0.03 318.26 us 145.00 us 2047.00 us 253 > XATTROP > 0.04 112.67 us 3.00 us 1258.00 us 1093 > OPENDIR > 0.06 167.98 us 23.00 us 1951.00 us 1014 > GETXATTR > 0.08 70.97 us 22.00 us 431.00 us 3053 > DISCARD > 0.13 183.78 us 66.00 us 2844.00 us 2091 > OPEN > 1.01 303.82 us 90.00 us 3085.00 us 9610 > FSTAT > 3.27 316.59 us 30.00 us 3716.00 us 29820 > LOOKUP > 5.83 265.79 us 59.00 us 53424.00 us 63296 > FXATTROP > 7.95 1373.89 us 51.00 us 64822.00 us 16717 > FSYNC > 23.17 851.99 us 14.00 us 16210553.00 us > 78555 FINODELK > 24.04 87.44 us 27.00 us 6917.00 us 794081 > WRITE > 34.36 111.91 us 14.00 us 984871.00 us 886790 > INODELK > > Duration: 7485 seconds > Data Read: 0 bytes > Data Written: 794080 bytes > > > ----------------------- > Here is the data from the volume that is backed by the SHDDs and has > one failed disk: > [root@ovirt2 yum.repos.d]# gluster volume profile data-hdd info > Brick: 172.172.1.12:/gluster/brick3/data-hdd > -------------------------------------------- > Cumulative Stats: > Block Size: 256b+ 512b+ > 1024b+ > No. of Reads: 1702 86 > 16 > No. of Writes: 0 767 > 71 > > Block Size: 2048b+ 4096b+ > 8192b+ > No. of Reads: 19 51841 > 2049 > No. of Writes: 76 60668 > 35727 > > Block Size: 16384b+ 32768b+ > 65536b+ > No. of Reads: 1744 639 > 1088 > No. of Writes: 8524 2410 > 1285 > > Block Size: 131072b+ > No. of Reads: 771999 > No. of Writes: 29584 > %-latency Avg-latency Min-Latency Max-Latency No. 
of calls > Fop > --------- ----------- ----------- ----------- ------------ > ---- > 0.00 0.00 us 0.00 us 0.00 us 2902 > RELEASE > 0.00 0.00 us 0.00 us 0.00 us 1517 > RELEASEDIR > 0.00 197.00 us 197.00 us 197.00 us 1 > FTRUNCATE > 0.00 70.24 us 16.00 us 758.00 us 51 > FLUSH > 0.00 143.93 us 82.00 us 305.00 us 57 > REMOVEXATTR > 0.00 178.63 us 105.00 us 712.00 us 60 > SETATTR > 0.00 67.30 us 19.00 us 572.00 us 555 > LK > 0.00 322.80 us 23.00 us 4673.00 us 138 > READDIR > 0.00 336.56 us 106.00 us 11994.00 us 237 > XATTROP > 0.00 84.70 us 28.00 us 1071.00 us 3469 > STATFS > 0.01 387.75 us 2.00 us 146017.00 us 1467 > OPENDIR > 0.01 148.59 us 21.00 us 64374.00 us 4454 > STAT > 0.02 783.02 us 16.00 us 93502.00 us 1902 > GETXATTR > 0.03 1516.10 us 17.00 us 210690.00 us 1364 > ENTRYLK > 0.03 2555.47 us 300.00 us 674454.00 us 1064 > READDIRP > 0.07 85.74 us 19.00 us 68340.00 us 62849 > FSTAT > 0.07 1978.12 us 59.00 us 202596.00 us 2729 > OPEN > 0.22 708.57 us 15.00 us 394799.00 us 25447 > LOOKUP > 5.94 2331.74 us 15.00 us 1099530.00 us 207534 > FINODELK > 7.31 8311.75 us 58.00 us 1800216.00 us 71668 > FXATTROP > 12.49 7735.19 us 51.00 us 3595513.00 us 131642 > WRITE > 17.70 957.08 us 16.00 us 13700466.00 us > 1508160 INODELK > 24.55 2546.43 us 26.00 us 5077347.00 us 786060 > READ > 31.56 49699.15 us 47.00 us 3746331.00 us 51777 > FSYNC > > Duration: 10101 seconds > Data Read: 101562897361 bytes > Data Written: 4834450432 bytes > > Interval 0 Stats: > Block Size: 256b+ 512b+ > 1024b+ > No. of Reads: 1702 86 > 16 > No. of Writes: 0 767 > 71 > > Block Size: 2048b+ 4096b+ > 8192b+ > No. of Reads: 19 51841 > 2049 > No. of Writes: 76 60668 > 35727 > > Block Size: 16384b+ 32768b+ > 65536b+ > No. of Reads: 1744 639 > 1088 > No. of Writes: 8524 2410 > 1285 > > Block Size: 131072b+ > No. of Reads: 771999 > No. of Writes: 29584 > %-latency Avg-latency Min-Latency Max-Latency No. 
of calls > Fop > --------- ----------- ----------- ----------- ------------ > ---- > 0.00 0.00 us 0.00 us 0.00 us 2902 > RELEASE > 0.00 0.00 us 0.00 us 0.00 us 1517 > RELEASEDIR > 0.00 197.00 us 197.00 us 197.00 us 1 > FTRUNCATE > 0.00 70.24 us 16.00 us 758.00 us 51 > FLUSH > 0.00 143.93 us 82.00 us 305.00 us 57 > REMOVEXATTR > 0.00 178.63 us 105.00 us 712.00 us 60 > SETATTR > 0.00 67.30 us 19.00 us 572.00 us 555 > LK > 0.00 322.80 us 23.00 us 4673.00 us 138 > READDIR > 0.00 336.56 us 106.00 us 11994.00 us 237 > XATTROP > 0.00 84.70 us 28.00 us 1071.00 us 3469 > STATFS > 0.01 387.75 us 2.00 us 146017.00 us 1467 > OPENDIR > 0.01 148.59 us 21.00 us 64374.00 us 4454 > STAT > 0.02 783.02 us 16.00 us 93502.00 us 1902 > GETXATTR > 0.03 1516.10 us 17.00 us 210690.00 us 1364 > ENTRYLK > 0.03 2555.47 us 300.00 us 674454.00 us 1064 > READDIRP > 0.07 85.73 us 19.00 us 68340.00 us 62849 > FSTAT > 0.07 1978.12 us 59.00 us 202596.00 us 2729 > OPEN > 0.22 708.57 us 15.00 us 394799.00 us 25447 > LOOKUP > 5.94 2334.57 us 15.00 us 1099530.00 us 207534 > FINODELK > 7.31 8311.49 us 58.00 us 1800216.00 us 71668 > FXATTROP > 12.49 7735.32 us 51.00 us 3595513.00 us 131642 > WRITE > 17.71 957.08 us 16.00 us 13700466.00 us > 1508160 INODELK > 24.56 2546.42 us 26.00 us 5077347.00 us 786060 > READ > 31.54 49651.63 us 47.00 us 3746331.00 us 51777 > FSYNC > > Duration: 10101 seconds > Data Read: 101562897361 bytes > Data Written: 4834450432 bytes > > > On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir <jim@palousetech.com> > wrote: > >> Thank you for your response. >> >> I have 4 gluster volumes. 3 are replica 2 + arbitrator. replica >> bricks are on ovirt1 and ovirt2, arbitrator on ovirt3. The 4th volume is >> replica 3, with a brick on all three ovirt machines. >> >> The first 3 volumes are on an SSD disk; the 4th is on a Seagate >> SSHD (same in all three machines). On ovirt3, the SSHD has reported hard >> IO failures, and that brick is offline. 
However, the other two replicas >> are fully operational (although they still show contents in the heal info >> command that won't go away, but that may be the case until I replace the >> failed disk). >> >> What is bothering me is that ALL 4 gluster volumes are showing >> horrible performance issues. At this point, as the bad disk has been >> completely offlined, I would expect gluster to perform at normal speed, but >> that is definitely not the case. >> >> I've also noticed that the performance hits seem to come in waves: >> things seem to work acceptably (but slow) for a while, then suddenly, its >> as if all disk IO on all volumes (including non-gluster local OS disk >> volumes for the hosts) pause for about 30 seconds, then IO resumes again. >> During those times, I start getting VM not responding and host not >> responding notices as well as the applications having major issues. >> >> I've shut down most of my VMs and am down to just my essential core >> VMs (shedded about 75% of my VMs). I still am experiencing the same issues. >> >> Am I correct in believing that once the failed disk was brought >> offline that performance should return to normal? >> >> On Tue, May 29, 2018 at 1:27 PM, Alex K <rightkicktech@gmail.com> >> wrote: >> >>> I would check disks status and accessibility of mount points where >>> your gluster volumes reside. 
>>> >>> On Tue, May 29, 2018, 22:28 Jim Kusznir <jim@palousetech.com> >>> wrote: >>> >>>> On one ovirt server, I'm now seeing these messages: >>>> [56474.239725] blk_update_request: 63 callbacks suppressed >>>> [56474.239732] blk_update_request: I/O error, dev dm-2, sector 0 >>>> [56474.240602] blk_update_request: I/O error, dev dm-2, sector >>>> 3905945472 >>>> [56474.241346] blk_update_request: I/O error, dev dm-2, sector >>>> 3905945584 >>>> [56474.242236] blk_update_request: I/O error, dev dm-2, sector >>>> 2048 >>>> [56474.243072] blk_update_request: I/O error, dev dm-2, sector >>>> 3905943424 >>>> [56474.243997] blk_update_request: I/O error, dev dm-2, sector >>>> 3905943536 >>>> [56474.247347] blk_update_request: I/O error, dev dm-2, sector 0 >>>> [56474.248315] blk_update_request: I/O error, dev dm-2, sector >>>> 3905945472 >>>> [56474.249231] blk_update_request: I/O error, dev dm-2, sector >>>> 3905945584 >>>> [56474.250221] blk_update_request: I/O error, dev dm-2, sector >>>> 2048 >>>> >>>> >>>> >>>> >>>> On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir < >>>> jim@palousetech.com> wrote: >>>> >>>>> I see in messages on ovirt3 (my 3rd machine, the one upgraded to >>>>> 4.2): >>>>> >>>>> May 29 11:54:41 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>>> database connection failed (No such file or directory) >>>>> May 29 11:54:51 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>>> database connection failed (No such file or directory) >>>>> May 29 11:55:01 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>>> database connection failed (No such file or directory) >>>>> (appears a lot). >>>>> >>>>> I also found on the ssh session of that, some sysv warnings >>>>> about the backing disk for one of the gluster volumes (straight replica >>>>> 3). The glusterfs process for that disk on that machine went offline. 
Its >>>>> my understanding that it should continue to work with the other two >>>>> machines while I attempt to replace that disk, right? Attempted writes >>>>> (touching an empty file) can take 15 seconds, repeating it later will be >>>>> much faster. >>>>> >>>>> Gluster generates a bunch of different log files, I don't know >>>>> what ones you want, or from which machine(s). >>>>> >>>>> How do I do "volume profiling"? >>>>> >>>>> Thanks! >>>>> >>>>> On Tue, May 29, 2018 at 11:53 AM, Sahina Bose <sabose@redhat.com >>>>> > wrote: >>>>> >>>>>> Do you see errors reported in the mount logs for the volume? If >>>>>> so, could you attach the logs? >>>>>> Any issues with your underlying disks. Can you also attach >>>>>> output of volume profiling? >>>>>> >>>>>> On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir < >>>>>> jim@palousetech.com> wrote: >>>>>> >>>>>>> Ok, things have gotten MUCH worse this morning. I'm getting >>>>>>> random errors from VMs, right now, about a third of my VMs have been paused >>>>>>> due to storage issues, and most of the remaining VMs are not performing >>>>>>> well. >>>>>>> >>>>>>> At this point, I am in full EMERGENCY mode, as my production >>>>>>> services are now impacted, and I'm getting calls coming in with problems... >>>>>>> >>>>>>> I'd greatly appreciate help...VMs are running VERY slowly >>>>>>> (when they run), and they are steadily getting worse. I don't know why. I >>>>>>> was seeing CPU peaks (to 100%) on several VMs, in perfect sync, for a few >>>>>>> minutes at a time (while the VM became unresponsive and any VMs I was >>>>>>> logged into that were linux were giving me the CPU stuck messages in my >>>>>>> origional post). Is all this storage related? >>>>>>> >>>>>>> I also have two different gluster volumes for VM storage, and >>>>>>> only one had the issues, but now VMs in both are being affected at the same >>>>>>> time and same way. 
>>>>>>> >>>>>>> --Jim >>>>>>> >>>>>>> On Mon, May 28, 2018 at 10:50 PM, Sahina Bose < >>>>>>> sabose@redhat.com> wrote: >>>>>>> >>>>>>>> [Adding gluster-users to look at the heal issue] >>>>>>>> >>>>>>>> On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir < >>>>>>>> jim@palousetech.com> wrote: >>>>>>>> >>>>>>>>> Hello: >>>>>>>>> >>>>>>>>> I've been having some cluster and gluster performance issues >>>>>>>>> lately. I also found that my cluster was out of date, and was trying to >>>>>>>>> apply updates (hoping to fix some of these), and discovered the ovirt 4.1 >>>>>>>>> repos were taken completely offline. So, I was forced to begin an upgrade >>>>>>>>> to 4.2. According to docs I found/read, I needed only add the new repo, do >>>>>>>>> a yum update, reboot, and be good on my hosts (did the yum update, the >>>>>>>>> engine-setup on my hosted engine). Things seemed to work relatively well, >>>>>>>>> except for a gluster sync issue that showed up. >>>>>>>>> >>>>>>>>> My cluster is a 3 node hyperconverged cluster. I upgraded >>>>>>>>> the hosted engine first, then engine 3. When engine 3 came back up, for >>>>>>>>> some reason one of my gluster volumes would not sync. 
Here's sample output: >>>>>>>>> >>>>>>>>> [root@ovirt3 ~]# gluster volume heal data-hdd info >>>>>>>>> Brick 172.172.1.11:/gluster/brick3/data-hdd >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>>>>>>>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>>>>>>>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>>>>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>>>>>>>> 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>>>>>>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>>>>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>>>>>>>> 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>>>>>>>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>>>>>>> Status: Connected >>>>>>>>> Number of entries: 8 >>>>>>>>> >>>>>>>>> Brick 172.172.1.12:/gluster/brick3/data-hdd >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>>>>>>>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>>>>>>>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>>>>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>>>>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>>>>>>> 
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
Status: Connected
Number of entries: 8

Brick 172.172.1.13:/gluster/brick3/data-hdd
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
Status: Connected
Number of entries: 8

---------
It's been in this state for a couple of days now, and bandwidth monitoring shows no appreciable data moving. I've tried repeatedly commanding a full heal from all three nodes in the cluster. It's always the same files that need healing.

When running "gluster volume heal data-hdd statistics", I sometimes see different information, but there is always some number of "heal failed" entries. It shows 0 for split brain.

I'm not quite sure what to do. I suspect it may be due to nodes 1 and 2 still being on the older ovirt/gluster release, but I'm afraid to upgrade and reboot them until I have a good gluster sync (don't need to create a split-brain issue). How do I proceed with this?

Second issue: I've been experiencing VERY POOR performance on most of my VMs, to the point that logging into a Windows 10 VM via remote desktop can take 5 minutes, and launching QuickBooks inside said VM can easily take 10 minutes. On some Linux VMs, I get random messages like this:

Message from syslogd@unifi at May 28 20:39:23 ...
kernel:[6171996.308904] NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [mongod:14766]

(the process and PID are often different)

I'm not quite sure what to do about this either. My initial thought was to upgrade everything to current and see if it's still there, but I cannot move forward with that until my gluster is healed...

Thanks!
--Jim

_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3LEV6ZQ3JV2XLAL7NYBTXOYMYUOTIRQF/
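[Aside: the heal-info listing above can be summarized mechanically, which is handy when checking whether the pending-entry counts change over time. This is a hypothetical sketch fitted only to the output format shown above, not part of any oVirt or Gluster tooling; the `sample` text is abbreviated from the listing.]

```python
import re

def heal_counts(text):
    """Return {brick: reported entry count} from `gluster volume heal <vol> info` output."""
    counts = {}
    brick = None
    for line in text.splitlines():
        m = re.match(r"Brick (\S+)", line)
        if m:
            brick = m.group(1)
        m = re.match(r"Number of entries: (\d+)", line)
        if m and brick:
            counts[brick] = int(m.group(1))
    return counts

sample = """Brick 172.172.1.11:/gluster/brick3/data-hdd
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8/0d6080b0
Status: Connected
Number of entries: 8

Brick 172.172.1.12:/gluster/brick3/data-hdd
Status: Connected
Number of entries: 8
"""
print(heal_counts(sample))
# {'172.172.1.11:/gluster/brick3/data-hdd': 8, '172.172.1.12:/gluster/brick3/data-hdd': 8}
```

Running this periodically (diffing the dicts) shows whether the self-heal daemon is actually making progress or, as reported above, stuck on the same entries.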

I did eventually get ovirt1 to come up. I lost track of the specific combination of things, but I know the last command was "reinstall host" from the GUI. I had tried that several times prior and it failed, but this time it worked. As I was running on almost no sleep, I don't remember the final things that allowed the reinstall to work. It did take my gluster volumes down again in the process, though.

Once I got ovirt1 back up and operational and got my gluster volumes online again, the GUI recognized my storage domains and brought the data center up, and then I could start doing things in the GUI again (like starting VMs).

I will perform more in-depth analysis and troubleshooting of ovirt1's low-level storage performance today. According to everything I've checked so far, I don't see anything wrong with it, though.

On the gluster side, I am using LVM and LV_Thin for my gluster backing volumes... Is that expected to be a problem?

--Jim

On Thu, May 31, 2018 at 11:25 PM, Sahina Bose <sabose@redhat.com> wrote:
On Thu, May 31, 2018 at 3:16 AM, Jim Kusznir <jim@palousetech.com> wrote:
I've been back at it, and still am unable to get more than one of my physical nodes to come online in ovirt, nor am I able to get more than the two gluster volumes (storage domains) to show online within ovirt.
In Storage -> Volumes, they all show offline (many with one brick down, which is correct: I have one server off). However, in Storage -> Domains, they all show down (although somehow, I got machines from the data domain to start), and there's no obvious way I've seen to bring them online. It claims none of the servers can see that volume, but I've quadruple-checked that the volumes are mounted on the engines and are fully functional there. I have some more VMs I need to get back up and running. How do I fix this?
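[Aside: the "is the volume actually mounted on the host" check boils down to a mount-point test. A minimal sketch; the path in the comment is an assumption about oVirt's usual layout, not taken from this thread:]

```python
import os

def is_mounted(path):
    # True if `path` exists and is a mount point (i.e. a different
    # device than its parent directory). oVirt typically mounts gluster
    # storage domains under /rhev/data-center/mnt/glusterSD/<host>:<vol>
    # on each host (assumed layout, for illustration only).
    return os.path.exists(path) and os.path.ismount(path)

print(is_mounted("/"))  # the root filesystem is always a mount point
```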
ovirt1, for unknown reasons, will not work. Attempts to bring it online fail, and I haven't figured out what log file to look in yet for more details.
As Krutika mentioned before, the storage performance issues seem to stem from the slow brick on ovirt1 - ovirt1.nwfiber.com:/gluster/brick1/engine
Have you checked this?
When you say ovirt1 will not work - do you mean the host cannot be activated in the oVirt engine? Can you look at/share the engine.log (on the HE VM) to understand why it won't activate? Also, if the storage domain is not accessible by any of the servers, check/share the vdsm.log to understand the error.
On Wed, May 30, 2018 at 9:36 AM, Jim Kusznir <jim@palousetech.com> wrote:
Hi all again:
I'm now subscribed to gluster-users as well, so I should get any replies from that side too.
At this point, I am seeing acceptable (although slower than I expect) performance much of the time, with periodic massive spikes in latency (occasionally so bad as to cause ovirt to detect a bad engine health status). Often, if I check the logs just then, I'll see those call traces in xfs_log_worker or other gluster processes, as well as hung task timeout messages.
As to the profile suggesting ovirt1 had poorer performance than ovirt2, I don't have an explanation. The output of "gluster volume info engine" is identical on both hosts. The computers and drives are identical (Dell R610 with PERC 6/i controller configured to pass through the drive). ovirt1's and ovirt2's partition schemes/maps do vary somewhat, but I figured that wouldn't be a big deal and gluster just uses the least common denominator. Is that accurate?
In case it was missed, here are the dmesg errors being thrown by gluster (as far as I can tell). They definitely started after I upgraded from gluster 3.8 to 3.12:
[ 6604.536156] perf: interrupt took too long (9593 > 9565), lowering kernel.perf_event_max_sample_rate to 20000
[ 8280.188734] INFO: task xfsaild/dm-10:1061 blocked for more than 120 seconds.
[ 8280.188787] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 8280.188837] xfsaild/dm-10   D ffff93203a2eeeb0     0  1061      2 0x00000000
[ 8280.188843] Call Trace:
[ 8280.188857] [<ffffffff960a3a2e>] ? try_to_del_timer_sync+0x5e/0x90
[ 8280.188864] [<ffffffff96713f79>] schedule+0x29/0x70
[ 8280.188932] [<ffffffffc049fe36>] _xfs_log_force+0x1c6/0x2c0 [xfs]
[ 8280.188939] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20
[ 8280.188972] [<ffffffffc04abfec>] ? xfsaild+0x16c/0x6f0 [xfs]
[ 8280.189003] [<ffffffffc049ff5c>] xfs_log_force+0x2c/0x70 [xfs]
[ 8280.189035] [<ffffffffc04abe80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[ 8280.189067] [<ffffffffc04abfec>] xfsaild+0x16c/0x6f0 [xfs]
[ 8280.189100] [<ffffffffc04abe80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[ 8280.189105] [<ffffffff960bb161>] kthread+0xd1/0xe0
[ 8280.189109] [<ffffffff960bb090>] ? insert_kthread_work+0x40/0x40
[ 8280.189117] [<ffffffff96720677>] ret_from_fork_nospec_begin+0x21/0x21
[ 8280.189121] [<ffffffff960bb090>] ? insert_kthread_work+0x40/0x40
[ 8280.189161] INFO: task glusterclogfsyn:6421 blocked for more than 120 seconds.
[ 8280.189207] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 8280.189256] glusterclogfsyn D ffff93203a2edee0     0  6421      1 0x00000080
[ 8280.189260] Call Trace:
[ 8280.189265] [<ffffffff96713f79>] schedule+0x29/0x70
[ 8280.189300] [<ffffffffc04a0388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[ 8280.189305] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20
[ 8280.189333] [<ffffffffc0480b97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[ 8280.189340] [<ffffffff9624f0e7>] do_fsync+0x67/0xb0
[ 8280.189345] [<ffffffff9672076f>] ? system_call_after_swapgs+0xbc/0x160
[ 8280.189348] [<ffffffff9624f3d0>] SyS_fsync+0x10/0x20
[ 8280.189352] [<ffffffff9672082f>] system_call_fastpath+0x1c/0x21
[ 8280.189356] [<ffffffff9672077b>] ? system_call_after_swapgs+0xc8/0x160
[ 8280.189360] INFO: task glusteriotwr2:766 blocked for more than 120 seconds.
[ 8280.189404] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 8280.189453] glusteriotwr2   D ffff931cfa510000     0   766      1 0x00000080
[ 8280.189457] Call Trace:
[ 8280.189476] [<ffffffffc01a3839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod]
[ 8280.189480] [<ffffffff96713f79>] schedule+0x29/0x70
[ 8280.189484] [<ffffffff967118e9>] schedule_timeout+0x239/0x2c0
[ 8280.189494] [<ffffffffc01a3d98>] ? dm_make_request+0x128/0x1a0 [dm_mod]
[ 8280.189498] [<ffffffff9671348d>] io_schedule_timeout+0xad/0x130
[ 8280.189502] [<ffffffff967145ad>] wait_for_completion_io+0xfd/0x140
[ 8280.189507] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20
[ 8280.189513] [<ffffffff9631e574>] blkdev_issue_flush+0xb4/0x110
[ 8280.189546] [<ffffffffc04984b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs]
[ 8280.189588] [<ffffffffc0480c40>] xfs_file_fsync+0x1b0/0x1e0 [xfs]
[ 8280.189593] [<ffffffff9624f0e7>] do_fsync+0x67/0xb0
[ 8280.189597] [<ffffffff9672076f>] ? system_call_after_swapgs+0xbc/0x160
[ 8280.189600] [<ffffffff9624f3d0>] SyS_fsync+0x10/0x20
[ 8280.189604] [<ffffffff9672082f>] system_call_fastpath+0x1c/0x21
[ 8280.189608] [<ffffffff9672077b>] ? system_call_after_swapgs+0xc8/0x160
[ 8400.186024] INFO: task xfsaild/dm-10:1061 blocked for more than 120 seconds.
[ 8400.186077] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 8400.186127] xfsaild/dm-10   D ffff93203a2eeeb0     0  1061      2 0x00000000
[ 8400.186133] Call Trace:
[ 8400.186146] [<ffffffff960a3a2e>] ? try_to_del_timer_sync+0x5e/0x90
[ 8400.186153] [<ffffffff96713f79>] schedule+0x29/0x70
[ 8400.186221] [<ffffffffc049fe36>] _xfs_log_force+0x1c6/0x2c0 [xfs]
[ 8400.186227] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20
[ 8400.186260] [<ffffffffc04abfec>] ? xfsaild+0x16c/0x6f0 [xfs]
[ 8400.186292] [<ffffffffc049ff5c>] xfs_log_force+0x2c/0x70 [xfs]
[ 8400.186324] [<ffffffffc04abe80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[ 8400.186356] [<ffffffffc04abfec>] xfsaild+0x16c/0x6f0 [xfs]
[ 8400.186388] [<ffffffffc04abe80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[ 8400.186394] [<ffffffff960bb161>] kthread+0xd1/0xe0
[ 8400.186398] [<ffffffff960bb090>] ? insert_kthread_work+0x40/0x40
[ 8400.186405] [<ffffffff96720677>] ret_from_fork_nospec_begin+0x21/0x21
[ 8400.186409] [<ffffffff960bb090>] ? insert_kthread_work+0x40/0x40
[ 8400.186450] INFO: task glusterclogfsyn:6421 blocked for more than 120 seconds.
[ 8400.186496] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 8400.186545] glusterclogfsyn D ffff93203a2edee0     0  6421      1 0x00000080
[ 8400.186549] Call Trace:
[ 8400.186554] [<ffffffff96713f79>] schedule+0x29/0x70
[ 8400.186589] [<ffffffffc04a0388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[ 8400.186593] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20
[ 8400.186622] [<ffffffffc0480b97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[ 8400.186629] [<ffffffff9624f0e7>] do_fsync+0x67/0xb0
[ 8400.186634] [<ffffffff9672076f>] ? system_call_after_swapgs+0xbc/0x160
[ 8400.186637] [<ffffffff9624f3d0>] SyS_fsync+0x10/0x20
[ 8400.186641] [<ffffffff9672082f>] system_call_fastpath+0x1c/0x21
[ 8400.186645] [<ffffffff9672077b>] ? system_call_after_swapgs+0xc8/0x160
[ 8400.186649] INFO: task glusteriotwr2:766 blocked for more than 120 seconds.
[ 8400.186693] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 8400.186741] glusteriotwr2   D ffff931cfa510000     0   766      1 0x00000080
[ 8400.186746] Call Trace:
[ 8400.186764] [<ffffffffc01a3839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod]
[ 8400.186768] [<ffffffff96713f79>] schedule+0x29/0x70
[ 8400.186772] [<ffffffff967118e9>] schedule_timeout+0x239/0x2c0
[ 8400.186782] [<ffffffffc01a3d98>] ? dm_make_request+0x128/0x1a0 [dm_mod]
[ 8400.186786] [<ffffffff9671348d>] io_schedule_timeout+0xad/0x130
[ 8400.186790] [<ffffffff967145ad>] wait_for_completion_io+0xfd/0x140
[ 8400.186795] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20
[ 8400.186800] [<ffffffff9631e574>] blkdev_issue_flush+0xb4/0x110
[ 8400.186833] [<ffffffffc04984b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs]
[ 8400.186862] [<ffffffffc0480c40>] xfs_file_fsync+0x1b0/0x1e0 [xfs]
[ 8400.186879] [<ffffffff9624f0e7>] do_fsync+0x67/0xb0
[ 8400.186884] [<ffffffff9672076f>] ? system_call_after_swapgs+0xbc/0x160
[ 8400.186887] [<ffffffff9624f3d0>] SyS_fsync+0x10/0x20
[ 8400.186891] [<ffffffff9672082f>] system_call_fastpath+0x1c/0x21
[ 8400.186895] [<ffffffff9672077b>] ? system_call_after_swapgs+0xc8/0x160
[ 8400.186912] INFO: task kworker/3:4:3191 blocked for more than 120 seconds.
[ 8400.186957] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 8400.187006] kworker/3:4     D ffff931faf598fd0     0  3191      2 0x00000080
[ 8400.187041] Workqueue: xfs-sync/dm-10 xfs_log_worker [xfs]
[ 8400.187043] Call Trace:
[ 8400.187050] [<ffffffff9614e45f>] ? delayacct_end+0x8f/0xb0
[ 8400.187055] [<ffffffff960dbbc8>] ? update_group_power+0x28/0x280
[ 8400.187059] [<ffffffff96713f79>] schedule+0x29/0x70
[ 8400.187062] [<ffffffff967118e9>] schedule_timeout+0x239/0x2c0
[ 8400.187066] [<ffffffff9671432d>] wait_for_completion+0xfd/0x140
[ 8400.187071] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20
[ 8400.187075] [<ffffffff960b418d>] flush_work+0xfd/0x190
[ 8400.187079] [<ffffffff960b06e0>] ? move_linked_works+0x90/0x90
[ 8400.187110] [<ffffffffc04a1d8a>] xlog_cil_force_lsn+0x8a/0x210 [xfs]
[ 8400.187142] [<ffffffffc049fcf5>] _xfs_log_force+0x85/0x2c0 [xfs]
[ 8400.187174] [<ffffffffc049ffd6>] ? xfs_log_worker+0x36/0x100 [xfs]
[ 8400.187206] [<ffffffffc049ff5c>] xfs_log_force+0x2c/0x70 [xfs]
[ 8400.187237] [<ffffffffc049ffd6>] xfs_log_worker+0x36/0x100 [xfs]
[ 8400.187241] [<ffffffff960b312f>] process_one_work+0x17f/0x440
[ 8400.187245] [<ffffffff960b3df6>] worker_thread+0x126/0x3c0
[ 8400.187249] [<ffffffff960b3cd0>] ? manage_workers.isra.24+0x2a0/0x2a0
[ 8400.187253] [<ffffffff960bb161>] kthread+0xd1/0xe0
[ 8400.187257] [<ffffffff960bb090>] ? insert_kthread_work+0x40/0x40
[ 8400.187261] [<ffffffff96720677>] ret_from_fork_nospec_begin+0x21/0x21
[ 8400.187265] [<ffffffff960bb090>] ? insert_kthread_work+0x40/0x40
[13493.643624] perf: interrupt took too long (12026 > 11991), lowering kernel.perf_event_max_sample_rate to 16000
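[Aside: to see at a glance which threads the hung-task warnings implicate (they cluster around gluster and xfs above), the dmesg text can be tallied. A rough sketch; the regex is fitted only to the "INFO: task NAME:PID blocked" format shown above:]

```python
import re
from collections import Counter

def hung_tasks(dmesg_text):
    """Count hung-task warnings per task name from dmesg output."""
    pat = re.compile(r"INFO: task (\S+?):\d+ blocked for more than")
    return Counter(pat.findall(dmesg_text))

sample = """[ 8280.188734] INFO: task xfsaild/dm-10:1061 blocked for more than 120 seconds.
[ 8280.189161] INFO: task glusterclogfsyn:6421 blocked for more than 120 seconds.
[ 8400.186024] INFO: task xfsaild/dm-10:1061 blocked for more than 120 seconds.
"""
print(hung_tasks(sample))  # counts per task name; xfsaild/dm-10 appears twice
```

In practice one would feed it `dmesg` output (or /var/log/messages); repeated offenders like xfsaild point at the underlying block device rather than at any one gluster thread.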
On Wed, May 30, 2018 at 2:44 AM, Krutika Dhananjay <kdhananj@redhat.com> wrote:
The profile seems to suggest very high latencies on the brick at ovirt1.nwfiber.com:/gluster/brick1/engine ovirt2.* shows decent numbers. Is everything OK with the brick on ovirt1? Are the bricks of engine volume on both these servers identical in terms of their config?
-Krutika
On Wed, May 30, 2018 at 3:07 PM, Jim Kusznir <jim@palousetech.com> wrote:
Hi:
Thank you. I was finally able to get my cluster minimally functional again (it's 2am local here; I had to be up by 6am). I set cluster.eager-lock to off, but I'm still seeing performance issues. Here's the profile for the engine volume:
Brick: ovirt1.nwfiber.com:/gluster/brick1/engine
------------------------------------------------
Cumulative Stats:
   Block Size:      256b+     512b+    4096b+
 No. of Reads:        702        24       772
No. of Writes:          0       939      4802
   Block Size:     8192b+   16384b+   32768b+
 No. of Reads:          0        51        49
No. of Writes:       4211       858       291
   Block Size:    65536b+  131072b+
 No. of Reads:         49      5232
No. of Writes:        333       588

 %-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls  Fop
 ---------  -----------  -----------  -----------  ------------  ----
      0.00      0.00 us      0.00 us      0.00 us             3  FORGET
      0.00      0.00 us      0.00 us      0.00 us          1227  RELEASE
      0.00      0.00 us      0.00 us      0.00 us          1035  RELEASEDIR
      0.00    827.00 us    619.00 us   1199.00 us            10  READDIRP
      0.00     98.38 us     30.00 us    535.00 us           144  ENTRYLK
      0.00    180.11 us    121.00 us    257.00 us            94  REMOVEXATTR
      0.00    208.03 us     23.00 us   1980.00 us           182  GETXATTR
      0.00   1212.71 us     35.00 us  29459.00 us            38  READDIR
      0.01    260.39 us    172.00 us    552.00 us           376  XATTROP
      0.01    305.10 us     45.00 us  144983.00 us          727  STATFS
      0.03   1267.23 us     22.00 us  418399.00 us          451  FLUSH
      0.04   1164.14 us     86.00 us  437638.00 us          600  OPEN
      0.07   2253.75 us      3.00 us  645037.00 us          598  OPENDIR
      0.08  17360.97 us    149.00 us  962156.00 us           94  SETATTR
      0.12   2733.36 us     50.00 us  1683945.00 us         877  FSTAT
      0.21   3041.55 us     29.00 us  1021732.00 us        1368  INODELK
      0.24    389.93 us    107.00 us  550203.00 us        11840  FXATTROP
      0.34  14820.49 us     38.00 us  1935527.00 us         444  STAT
      0.88   3765.77 us     42.00 us  1341978.00 us        4581  LOOKUP
     15.06  19624.04 us     26.00 us  6412511.00 us       14971  FINODELK
     21.63  59817.26 us     45.00 us  4008832.00 us        7054  FSYNC
     23.19  263941.14 us    86.00 us  4130892.00 us        1714  READ
     38.10  125578.64 us   207.00 us  7682338.00 us        5919  WRITE
Duration: 6957 seconds
Data Read: 694832857 bytes
Data Written: 192751104 bytes
Interval 0 Stats:
   Block Size:      256b+     512b+    4096b+
 No. of Reads:        702        24       772
No. of Writes:          0       939      4802
   Block Size:     8192b+   16384b+   32768b+
 No. of Reads:          0        51        49
No. of Writes:       4211       858       291
   Block Size:    65536b+  131072b+
 No. of Reads:         49      5232
No. of Writes:        333       588

 %-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls  Fop
 ---------  -----------  -----------  -----------  ------------  ----
      0.00      0.00 us      0.00 us      0.00 us             3  FORGET
      0.00      0.00 us      0.00 us      0.00 us          1227  RELEASE
      0.00      0.00 us      0.00 us      0.00 us          1035  RELEASEDIR
      0.00    827.00 us    619.00 us   1199.00 us            10  READDIRP
      0.00     98.38 us     30.00 us    535.00 us           144  ENTRYLK
      0.00    180.11 us    121.00 us    257.00 us            94  REMOVEXATTR
      0.00    208.03 us     23.00 us   1980.00 us           182  GETXATTR
      0.00   1212.71 us     35.00 us  29459.00 us            38  READDIR
      0.01    260.39 us    172.00 us    552.00 us           376  XATTROP
      0.01    305.10 us     45.00 us  144983.00 us          727  STATFS
      0.03   1267.23 us     22.00 us  418399.00 us          451  FLUSH
      0.04   1164.14 us     86.00 us  437638.00 us          600  OPEN
      0.07   2253.75 us      3.00 us  645037.00 us          598  OPENDIR
      0.08  17360.97 us    149.00 us  962156.00 us           94  SETATTR
      0.12   2733.36 us     50.00 us  1683945.00 us         877  FSTAT
      0.21   3041.55 us     29.00 us  1021732.00 us        1368  INODELK
      0.24    389.93 us    107.00 us  550203.00 us        11840  FXATTROP
      0.34  14820.49 us     38.00 us  1935527.00 us         444  STAT
      0.91   3859.75 us     42.00 us  1341978.00 us        4581  LOOKUP
     15.05  19622.80 us     26.00 us  6412511.00 us       14971  FINODELK
     21.62  59809.04 us     45.00 us  4008832.00 us        7054  FSYNC
     23.18  263941.14 us    86.00 us  4130892.00 us        1714  READ
     38.09  125562.74 us   207.00 us  7682338.00 us        5919  WRITE
Duration: 6957 seconds
Data Read: 694832857 bytes
Data Written: 192751104 bytes
Brick: ovirt2.nwfiber.com:/gluster/brick1/engine
------------------------------------------------
Cumulative Stats:
   Block Size:      256b+     512b+    1024b+
 No. of Reads:        704       167       216
No. of Writes:          0       939         0
   Block Size:     2048b+    4096b+    8192b+
 No. of Reads:        424     34918      9866
No. of Writes:          0      4802      4211
   Block Size:    16384b+   32768b+   65536b+
 No. of Reads:      12695      3290      3880
No. of Writes:        858       291       333
   Block Size:   131072b+
 No. of Reads:      13718
No. of Writes:        596

 %-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls  Fop
 ---------  -----------  -----------  -----------  ------------  ----
      0.00      0.00 us      0.00 us      0.00 us             3  FORGET
      0.00      0.00 us      0.00 us      0.00 us          1228  RELEASE
      0.00      0.00 us      0.00 us      0.00 us          1041  RELEASEDIR
      0.06     74.26 us     21.00 us    915.00 us           144  ENTRYLK
      0.07    141.97 us     87.00 us    268.00 us            94  REMOVEXATTR
      0.08    370.68 us     22.00 us   1311.00 us            38  READDIR
      0.09    184.88 us    108.00 us    316.00 us            94  SETATTR
      0.14     56.41 us     21.00 us    540.00 us           451  FLUSH
      0.14    139.42 us     18.00 us   3251.00 us           184  GETXATTR
      0.31     96.06 us      2.00 us   2293.00 us           598  OPENDIR
      0.35     88.76 us     31.00 us    716.00 us           727  STATFS
      0.43    131.96 us     59.00 us    784.00 us           600  OPEN
      0.44    214.77 us    110.00 us    851.00 us           376  XATTROP
      0.44     81.02 us     21.00 us   3029.00 us           998  STAT
      0.51     68.57 us     17.00 us   3039.00 us          1368  INODELK
      1.26    429.06 us    274.00 us    988.00 us           539  READDIRP
      1.89    109.48 us     38.00 us   2492.00 us          3175  FSTAT
      5.09     62.48 us     19.00 us   4578.00 us         14967  FINODELK
      5.72    229.43 us     32.00 us    979.00 us          4581  LOOKUP
     11.48    356.40 us    144.00 us  16222.00 us          5917  WRITE
     12.40    192.55 us     87.00 us   4372.00 us         11834  FXATTROP
     12.48    325.11 us     33.00 us  10702.00 us          7051  FSYNC
     46.61   2063.11 us    131.00 us  193833.00 us         4151  READ
Duration: 6958 seconds
Data Read: 2782908225 bytes
Data Written: 193799680 bytes
Interval 0 Stats:
   Block Size:      256b+     512b+    1024b+
 No. of Reads:        704       167       216
No. of Writes:          0       939         0
   Block Size:     2048b+    4096b+    8192b+
 No. of Reads:        424     34918      9866
No. of Writes:          0      4802      4211
   Block Size:    16384b+   32768b+   65536b+
 No. of Reads:      12695      3290      3880
No. of Writes:        858       291       333
   Block Size:   131072b+
 No. of Reads:      13718
No. of Writes:        596

 %-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls  Fop
 ---------  -----------  -----------  -----------  ------------  ----
      0.00      0.00 us      0.00 us      0.00 us             3  FORGET
      0.00      0.00 us      0.00 us      0.00 us          1228  RELEASE
      0.00      0.00 us      0.00 us      0.00 us          1041  RELEASEDIR
      0.06     74.26 us     21.00 us    915.00 us           144  ENTRYLK
      0.07    141.97 us     87.00 us    268.00 us            94  REMOVEXATTR
      0.08    370.68 us     22.00 us   1311.00 us            38  READDIR
      0.09    184.88 us    108.00 us    316.00 us            94  SETATTR
      0.14     56.41 us     21.00 us    540.00 us           451  FLUSH
      0.14    139.42 us     18.00 us   3251.00 us           184  GETXATTR
      0.31     96.06 us      2.00 us   2293.00 us           598  OPENDIR
      0.35     88.76 us     31.00 us    716.00 us           727  STATFS
      0.43    131.96 us     59.00 us    784.00 us           600  OPEN
      0.44    214.77 us    110.00 us    851.00 us           376  XATTROP
      0.44     81.02 us     21.00 us   3029.00 us           998  STAT
      0.51     68.57 us     17.00 us   3039.00 us          1368  INODELK
      1.26    429.06 us    274.00 us    988.00 us           539  READDIRP
      1.89    109.51 us     38.00 us   2492.00 us          3175  FSTAT
      5.09     62.48 us     19.00 us   4578.00 us         14967  FINODELK
      5.72    229.43 us     32.00 us    979.00 us          4581  LOOKUP
     11.48    356.40 us    144.00 us  16222.00 us          5917  WRITE
     12.40    192.55 us     87.00 us   4372.00 us         11834  FXATTROP
     12.48    325.11 us     33.00 us  10702.00 us          7051  FSYNC
     46.62   2063.91 us    131.00 us  193833.00 us         4151  READ
Duration: 6958 seconds
Data Read: 2782908225 bytes
Data Written: 193799680 bytes
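[Aside: reading the two profiles side by side, a quick back-of-the-envelope comparison of the average latencies for the dominant FOPs shows just how much slower the ovirt1 brick is. A sketch using only numbers transcribed from the output above:]

```python
# Average latency (microseconds) per fop, transcribed from the profiles above.
ovirt1 = {"FSYNC": 59817.26, "READ": 263941.14, "WRITE": 125578.64}
ovirt2 = {"FSYNC": 325.11, "READ": 2063.11, "WRITE": 356.40}

for fop in ("FSYNC", "READ", "WRITE"):
    ratio = ovirt1[fop] / ovirt2[fop]
    print(f"{fop}: ovirt1 brick is roughly {ratio:.0f}x slower on average")
```

The two-orders-of-magnitude gap on FSYNC/READ/WRITE, with identical volume options and hardware, is what points the suspicion at ovirt1's local storage stack rather than at gluster itself.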
gluster> volume info engine
Volume Name: engine
Type: Replicate
Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
performance.strict-o-direct: on
nfs.disable: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 6
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: off
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
On Tue, May 29, 2018 at 9:17 PM, Krutika Dhananjay < kdhananj@redhat.com> wrote:
Adding Ravi to look into the heal issue.
As for the fsync hang and subsequent IO errors, it seems a lot like https://bugzilla.redhat.com/show_bug.cgi?id=1497156 and Paolo Bonzini from qemu had pointed out that this would be fixed by the following commit:
commit e72c9a2a67a6400c8ef3d01d4c461dbbbfa0e1f0
Author: Paolo Bonzini <pbonzini@redhat.com>
Date:   Wed Jun 21 16:35:46 2017 +0200
scsi: virtio_scsi: let host do exception handling
virtio_scsi tries to do exception handling after the default 30 seconds timeout expires. However, it's better to let the host control the timeout, otherwise with a heavy I/O load it is likely that an abort will also timeout. This leads to fatal errors like filesystems going offline.
Disable the 'sd' timeout and allow the host to do exception handling, following the precedent of the storvsc driver.
Hannes has a proposal to introduce timeouts in virtio, but this provides an immediate solution for stable kernels too.
[mkp: fixed typo]
Reported-by: Douglas Miller <dougmill@linux.vnet.ibm.com>
Cc: "James E.J. Bottomley" <jejb@linux.vnet.ibm.com>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Hannes Reinecke <hare@suse.de>
Cc: linux-scsi@vger.kernel.org
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Adding Paolo/Kevin to comment.
As for the poor gluster performance, could you disable cluster.eager-lock and see if that makes any difference:
# gluster volume set <VOL> cluster.eager-lock off
Do also capture the volume profile again if you still see performance issues after disabling eager-lock.
-Krutika
On Wed, May 30, 2018 at 6:55 AM, Jim Kusznir <jim@palousetech.com> wrote:
> I also finally found the following in my system log on one server:
>
> [10679.524491] INFO: task glusterclogro:14933 blocked for more than 120 seconds.
> [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [10679.527144] glusterclogro   D ffff97209832bf40     0 14933      1 0x00000080
> [10679.527150] Call Trace:
> [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70
> [10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
> [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
> [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
> [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
> [10679.527268] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
> [10679.527271] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20
> [10679.527275] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
> [10679.527279] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
> [10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds.
> [10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [10679.529956] glusterposixfsy D ffff972495f84f10     0 14941      1 0x00000080
> [10679.529961] Call Trace:
> [10679.529966] [<ffffffffb9913f79>] schedule+0x29/0x70
> [10679.530003] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
> [10679.530008] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
> [10679.530038] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
> [10679.530042] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
> [10679.530046] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
> [10679.530050] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20
> [10679.530054] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
> [10679.530058] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
> [10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120 seconds.
> [10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [10679.533732] glusteriotwr13  D ffff9720a83f0000     0 15486      1 0x00000080
> [10679.533738] Call Trace:
> [10679.533747] [<ffffffffb9913f79>] schedule+0x29/0x70
> [10679.533799] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
> [10679.533806] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
> [10679.533846] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
> [10679.533852] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
> [10679.533858] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
> [10679.533863] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20
> [10679.533868] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
> [10679.533873] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
> [10919.512757] INFO: task glusterclogro:14933 blocked for more than 120 seconds.
> [10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [10919.516663] glusterclogro   D ffff97209832bf40     0 14933      1 0x00000080
> [10919.516677] Call Trace:
> [10919.516690] [<ffffffffb9913f79>] schedule+0x29/0x70
> [10919.516696] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
> [10919.516703] [<ffffffffb951cc04>] ? blk_finish_plug+0x14/0x40
> [10919.516768] [<ffffffffc05e9224>] ? _xfs_buf_ioapply+0x334/0x460 [xfs]
> [10919.516774] [<ffffffffb991432d>] wait_for_completion+0xfd/0x140
> [10919.516782] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
> [10919.516821] [<ffffffffc05eb0a3>] ? _xfs_buf_read+0x23/0x40 [xfs]
> [10919.516859] [<ffffffffc05eafa9>] xfs_buf_submit_wait+0xf9/0x1d0 [xfs]
> [10919.516902] [<ffffffffc061b279>] ? xfs_trans_read_buf_map+0x199/0x400 [xfs]
> [10919.516940] [<ffffffffc05eb0a3>] _xfs_buf_read+0x23/0x40 [xfs]
> [10919.516977] [<ffffffffc05eb1b9>] xfs_buf_read_map+0xf9/0x160 [xfs]
> [10919.517022] [<ffffffffc061b279>] xfs_trans_read_buf_map+0x199/0x400 [xfs]
> [10919.517057] [<ffffffffc05c8d04>] xfs_da_read_buf+0xd4/0x100 [xfs]
> [10919.517091] [<ffffffffc05c8d53>] xfs_da3_node_read+0x23/0xd0 [xfs]
> [10919.517126] [<ffffffffc05c9fee>] xfs_da3_node_lookup_int+0x6e/0x2f0 [xfs]
> [10919.517160] [<ffffffffc05d5a1d>] xfs_dir2_node_lookup+0x4d/0x170 [xfs]
> [10919.517194] [<ffffffffc05ccf5d>] xfs_dir_lookup+0x1bd/0x1e0 [xfs]
> [10919.517233] [<ffffffffc05fd8d9>] xfs_lookup+0x69/0x140 [xfs]
> [10919.517271] [<ffffffffc05fa018>] xfs_vn_lookup+0x78/0xc0 [xfs]
> [10919.517278] [<ffffffffb9425cf3>] lookup_real+0x23/0x60
> [10919.517283] [<ffffffffb9426702>] __lookup_hash+0x42/0x60
> [10919.517288] [<ffffffffb942d519>] SYSC_renameat2+0x3a9/0x5a0
> [10919.517296] [<ffffffffb94d3753>] ? selinux_file_free_security+0x23/0x30
> [10919.517304] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
> [10919.517309] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
> [10919.517313] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
> [10919.517318] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
> [10919.517323] [<ffffffffb942e58e>] SyS_renameat2+0xe/0x10
> [10919.517328] [<ffffffffb942e5ce>] SyS_rename+0x1e/0x20
> [10919.517333] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
> [10919.517339] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
> [11159.496095] INFO: task glusteriotwr9:15482 blocked for more than 120 seconds.
> [11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [11159.498978] glusteriotwr9   D ffff971fa0fa1fa0     0 15482      1 0x00000080
> [11159.498984] Call Trace:
> [11159.498995] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50
> [11159.498999] [<ffffffffb9913f79>] schedule+0x29/0x70
> [11159.499003] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
> [11159.499056] [<ffffffffc05dd9b7>] ? xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs]
> [11159.499082] [<ffffffffc05dd43e>] ? xfs_iext_bno_to_irec+0x8e/0xd0 [xfs]
> [11159.499090] [<ffffffffb92f7a12>] ? ktime_get_ts64+0x52/0xf0
> [11159.499093] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50
> [11159.499097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130
> [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20
> [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50
> [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0
> [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90
> [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40
> [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70
> [11159.499125] [<ffffffffb9394e85>] grab_cache_page_write_begin+0x55/0xc0
> [11159.499130] [<ffffffffb9484b76>] iomap_write_begin+0x66/0x100
> [11159.499135] [<ffffffffb9484edf>] iomap_write_actor+0xcf/0x1d0
> [11159.499140] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80
> [11159.499144] [<ffffffffb94854e7>] iomap_apply+0xb7/0x150
> [11159.499149] [<ffffffffb9485621>] iomap_file_buffered_write+0xa1/0xe0
> [11159.499153] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80
> [11159.499182] [<ffffffffc05f025d>] xfs_file_buffered_aio_write+0x12d/0x2c0 [xfs]
> [11159.499213] [<ffffffffc05f057d>] xfs_file_aio_write+0x18d/0x1b0 [xfs]
> [11159.499217] [<ffffffffb941a533>] do_sync_write+0x93/0xe0
> [11159.499222] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0
> [11159.499225] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0
> [11159.499230] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
> [11159.499234] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
> [11159.499238] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
> [11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than 120 seconds.
> [11279.490197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [11279.491665] xfsaild/dm-10   D ffff9720a8660fd0     0  1134      2 0x00000000
> [11279.491671] Call Trace:
> [11279.491682] [<ffffffffb92a3a2e>] ? try_to_del_timer_sync+0x5e/0x90
> [11279.491688] [<ffffffffb9913f79>] schedule+0x29/0x70
> [11279.491744] [<ffffffffc060de36>] _xfs_log_force+0x1c6/0x2c0 [xfs]
> [11279.491750] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
> [11279.491783] [<ffffffffc0619fec>] ? xfsaild+0x16c/0x6f0 [xfs]
> [11279.491817] [<ffffffffc060df5c>] xfs_log_force+0x2c/0x70 [xfs]
> [11279.491849] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
> [11279.491880] [<ffffffffc0619fec>] xfsaild+0x16c/0x6f0 [xfs]
> [11279.491913] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
> [11279.491919] [<ffffffffb92bb161>] kthread+0xd1/0xe0
> [11279.491926] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40
> [11279.491932] [<ffffffffb9920677>] ret_from_fork_nospec_begin+0x21/0x21
> [11279.491936] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40
> [11279.491976] INFO: task glusterclogfsyn:14934 blocked for more than 120 seconds.
> [11279.493466] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [11279.494952] glusterclogfsyn D ffff97209832af70     0 14934      1 0x00000080
> [11279.494957] Call Trace:
> [11279.494979] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod]
> [11279.494983] [<ffffffffb9913f79>] schedule+0x29/0x70
> [11279.494987] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
> [11279.494997] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod]
> [11279.495001] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130
> [11279.495005] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140
> [11279.495010] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
> [11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110
> [11279.495049] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs]
> [11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs]
> [11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
> [11279.495090] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
> [11279.495094] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20
> [11279.495098] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
> [11279.495102] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
> [11279.495105] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds.
> [11279.496606] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [11279.498114] glusterposixfsy D ffff972495f84f10     0 14941      1 0x00000080
> [11279.498118] Call Trace:
> [11279.498134] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod]
> [11279.498138] [<ffffffffb9913f79>] schedule+0x29/0x70
> [11279.498142] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
> [11279.498152] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod]
> [11279.498156] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130
> [11279.498160] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140
> [11279.498165] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
> [11279.498169] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110
> [11279.498202] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs]
> [11279.498231] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs]
> [11279.498238] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
> [11279.498242] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
> [11279.498246] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20
> [11279.498250] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
> [11279.498254] [<ffffffffb992077b>] ?
system_call_after_swapgs+0xc8/ > 0x160 > [11279.498257] INFO: task glusteriotwr1:14950 blocked for more than > 120 seconds. > [11279.499789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" > disables this message. > [11279.501343] glusteriotwr1 D ffff97208b6daf70 0 14950 1 > 0x00000080 > [11279.501348] Call Trace: > [11279.501353] [<ffffffffb9913f79>] schedule+0x29/0x70 > [11279.501390] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 > [xfs] > [11279.501396] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 > [11279.501428] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] > [11279.501432] [<ffffffffb944ef3f>] generic_write_sync+0x4f/0x70 > [11279.501461] [<ffffffffc05f0545>] xfs_file_aio_write+0x155/0x1b0 > [xfs] > [11279.501466] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 > [11279.501471] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 > [11279.501475] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 > [11279.501479] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ > 0x160 > [11279.501483] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 > [11279.501489] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ > 0x160 > [11279.501493] INFO: task glusteriotwr4:14953 blocked for more than > 120 seconds. > [11279.503047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" > disables this message. > [11279.504630] glusteriotwr4 D ffff972499f2bf40 0 14953 1 > 0x00000080 > [11279.504635] Call Trace: > [11279.504640] [<ffffffffb9913f79>] schedule+0x29/0x70 > [11279.504676] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 > [xfs] > [11279.504681] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 > [11279.504710] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] > [11279.504714] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 > [11279.504718] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ > 0x160 > [11279.504722] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 > [11279.504725] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 > [11279.504730] [<ffffffffb992077b>] ? 
system_call_after_swapgs+0xc8/0x160
> [12127.466494] perf: interrupt took too long (8263 > 8150), lowering kernel.perf_event_max_sample_rate to 24000
>
> --------------------
> I think this is the cause of the massive ovirt performance issues, irrespective of gluster volume. At the time this happened, I was also ssh'ed into the host and was running some rpm query commands. I had just run rpm -qa | grep glusterfs (to verify what version was actually installed), and that command took almost 2 minutes to return! Normally it takes less than 2 seconds. That is all pure local SSD IO, too....
>
> I'm no expert, but it's my understanding that any time software causes these kinds of issues, it's a serious bug in the software, even if it's mishandled exceptions. Is this correct?
>
> --Jim
>
> On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir <jim@palousetech.com> wrote:
>
>> I think this is the profile information for one of the volumes that lives on the SSDs and is fully operational with no down/problem disks:
>>
>> [root@ovirt2 yum.repos.d]# gluster volume profile data info
>> Brick: ovirt2.nwfiber.com:/gluster/brick2/data
>> ----------------------------------------------
>> Cumulative Stats:
>>    Block Size:     256b+     512b+    1024b+
>>  No. of Reads:       983      2696      1059
>> No. of Writes:         0      1113       302
>>
>>    Block Size:    2048b+    4096b+    8192b+
>>  No. of Reads:       852     88608     53526
>> No. of Writes:       522    812340     76257
>>
>>    Block Size:   16384b+   32768b+   65536b+
>>  No. of Reads:     54351    241901     15024
>> No. of Writes:     21636      8656      8976
>>
>>    Block Size:  131072b+
>>  No. of Reads:    524156
>> No. of Writes:    296071
>> %-latency  Avg-latency  Min-Latency  Max-Latency  No.
of >> calls Fop >> --------- ----------- ----------- ----------- >> ------------ ---- >> 0.00 0.00 us 0.00 us 0.00 us >> 4189 RELEASE >> 0.00 0.00 us 0.00 us 0.00 us >> 1257 RELEASEDIR >> 0.00 46.19 us 12.00 us 187.00 us >> 69 FLUSH >> 0.00 147.00 us 78.00 us 367.00 us 86 >> REMOVEXATTR >> 0.00 223.46 us 24.00 us 1166.00 us >> 149 READDIR >> 0.00 565.34 us 76.00 us 3639.00 us >> 88 FTRUNCATE >> 0.00 263.28 us 20.00 us 28385.00 us >> 228 LK >> 0.00 98.84 us 2.00 us 880.00 us >> 1198 OPENDIR >> 0.00 91.59 us 26.00 us 10371.00 us >> 3853 STATFS >> 0.00 494.14 us 17.00 us 193439.00 us >> 1171 GETXATTR >> 0.00 299.42 us 35.00 us 9799.00 us >> 2044 READDIRP >> 0.00 1965.31 us 110.00 us 382258.00 us >> 321 XATTROP >> 0.01 113.40 us 24.00 us 61061.00 us >> 8134 STAT >> 0.01 755.38 us 57.00 us 607603.00 us >> 3196 DISCARD >> 0.05 2690.09 us 58.00 us 2704761.00 us >> 3206 OPEN >> 0.10 119978.25 us 97.00 us 9406684.00 us >> 154 SETATTR >> 0.18 101.73 us 28.00 us 700477.00 us >> 313379 FSTAT >> 0.23 1059.84 us 25.00 us 2716124.00 us >> 38255 LOOKUP >> 0.47 1024.11 us 54.00 us 6197164.00 us >> 81455 FXATTROP >> 1.72 2984.00 us 15.00 us 37098954.00 us >> 103020 FINODELK >> 5.92 44315.32 us 51.00 us 24731536.00 us >> 23957 FSYNC >> 13.27 2399.78 us 25.00 us 22089540.00 us >> 991005 READ >> 37.00 5980.43 us 52.00 us 22099889.00 us >> 1108976 WRITE >> 41.04 5452.75 us 13.00 us 22102452.00 us >> 1349053 INODELK >> >> Duration: 10026 seconds >> Data Read: 80046027759 bytes >> Data Written: 44496632320 bytes >> >> Interval 1 Stats: >> Block Size: 256b+ 512b+ >> 1024b+ >> No. of Reads: 983 2696 >> 1059 >> No. of Writes: 0 838 >> 185 >> >> Block Size: 2048b+ 4096b+ >> 8192b+ >> No. of Reads: 852 85856 >> 51575 >> No. of Writes: 382 705802 >> 57812 >> >> Block Size: 16384b+ 32768b+ >> 65536b+ >> No. of Reads: 52673 232093 >> 14984 >> No. of Writes: 13499 4908 >> 4242 >> >> Block Size: 131072b+ >> No. of Reads: 460040 >> No. 
of Writes: 6411 >> %-latency Avg-latency Min-Latency Max-Latency No. of >> calls Fop >> --------- ----------- ----------- ----------- >> ------------ ---- >> 0.00 0.00 us 0.00 us 0.00 us >> 2093 RELEASE >> 0.00 0.00 us 0.00 us 0.00 us >> 1093 RELEASEDIR >> 0.00 53.38 us 26.00 us 111.00 us >> 16 FLUSH >> 0.00 145.14 us 78.00 us 367.00 us 71 >> REMOVEXATTR >> 0.00 190.96 us 114.00 us 298.00 us >> 71 SETATTR >> 0.00 213.38 us 24.00 us 1145.00 us >> 90 READDIR >> 0.00 263.28 us 20.00 us 28385.00 us >> 228 LK >> 0.00 101.76 us 2.00 us 880.00 us >> 1093 OPENDIR >> 0.01 93.60 us 27.00 us 10371.00 us >> 3090 STATFS >> 0.02 537.47 us 17.00 us 193439.00 us >> 1038 GETXATTR >> 0.03 297.44 us 35.00 us 9799.00 us >> 1990 READDIRP >> 0.03 2357.28 us 110.00 us 382258.00 us >> 253 XATTROP >> 0.04 385.93 us 58.00 us 47593.00 us >> 2091 OPEN >> 0.04 114.86 us 24.00 us 61061.00 us >> 7715 STAT >> 0.06 444.59 us 57.00 us 333240.00 us >> 3053 DISCARD >> 0.42 316.24 us 25.00 us 290728.00 us >> 29823 LOOKUP >> 0.73 257.92 us 54.00 us 344812.00 us >> 63296 FXATTROP >> 1.37 98.30 us 28.00 us 67621.00 us >> 313172 FSTAT >> 1.58 2124.69 us 51.00 us 849200.00 us >> 16717 FSYNC >> 5.73 162.46 us 52.00 us 748492.00 us >> 794079 WRITE >> 7.19 2065.17 us 16.00 us 37098954.00 us >> 78381 FINODELK >> 36.44 886.32 us 25.00 us 2216436.00 us >> 925421 READ >> 46.30 1178.04 us 13.00 us 1700704.00 us >> 884635 INODELK >> >> Duration: 7485 seconds >> Data Read: 71250527215 bytes >> Data Written: 5119903744 bytes >> >> Brick: ovirt3.nwfiber.com:/gluster/brick2/data >> ---------------------------------------------- >> Cumulative Stats: >> Block Size: 1b+ >> No. of Reads: 0 >> No. of Writes: 3264419 >> %-latency Avg-latency Min-Latency Max-Latency No. 
of >> calls Fop >> --------- ----------- ----------- ----------- >> ------------ ---- >> 0.00 0.00 us 0.00 us 0.00 us >> 90 FORGET >> 0.00 0.00 us 0.00 us 0.00 us >> 9462 RELEASE >> 0.00 0.00 us 0.00 us 0.00 us >> 4254 RELEASEDIR >> 0.00 50.52 us 13.00 us 190.00 us >> 71 FLUSH >> 0.00 186.97 us 87.00 us 713.00 us 86 >> REMOVEXATTR >> 0.00 79.32 us 33.00 us 189.00 us >> 228 LK >> 0.00 220.98 us 129.00 us 513.00 us >> 86 SETATTR >> 0.01 259.30 us 26.00 us 2632.00 us >> 137 READDIR >> 0.02 322.76 us 145.00 us 2125.00 us >> 321 XATTROP >> 0.03 109.55 us 2.00 us 1258.00 us >> 1193 OPENDIR >> 0.05 70.21 us 21.00 us 431.00 us >> 3196 DISCARD >> 0.05 169.26 us 21.00 us 2315.00 us >> 1545 GETXATTR >> 0.12 176.85 us 63.00 us 2844.00 us >> 3206 OPEN >> 0.61 303.49 us 90.00 us 3085.00 us >> 9633 FSTAT >> 2.44 305.66 us 28.00 us 3716.00 us >> 38230 LOOKUP >> 4.52 266.22 us 55.00 us 53424.00 us >> 81455 FXATTROP >> 6.96 1397.99 us 51.00 us 64822.00 us >> 23889 FSYNC >> 16.48 84.74 us 25.00 us 6917.00 us >> 932592 WRITE >> 30.16 106.90 us 13.00 us 3920189.00 us >> 1353046 INODELK >> 38.55 1794.52 us 14.00 us 16210553.00 us >> 103039 FINODELK >> >> Duration: 66562 seconds >> Data Read: 0 bytes >> Data Written: 3264419 bytes >> >> Interval 1 Stats: >> Block Size: 1b+ >> No. of Reads: 0 >> No. of Writes: 794080 >> %-latency Avg-latency Min-Latency Max-Latency No. 
of >> calls Fop >> --------- ----------- ----------- ----------- >> ------------ ---- >> 0.00 0.00 us 0.00 us 0.00 us >> 2093 RELEASE >> 0.00 0.00 us 0.00 us 0.00 us >> 1093 RELEASEDIR >> 0.00 70.31 us 26.00 us 125.00 us >> 16 FLUSH >> 0.00 193.10 us 103.00 us 713.00 us 71 >> REMOVEXATTR >> 0.01 227.32 us 133.00 us 513.00 us >> 71 SETATTR >> 0.01 79.32 us 33.00 us 189.00 us >> 228 LK >> 0.01 259.83 us 35.00 us 1138.00 us >> 89 READDIR >> 0.03 318.26 us 145.00 us 2047.00 us >> 253 XATTROP >> 0.04 112.67 us 3.00 us 1258.00 us >> 1093 OPENDIR >> 0.06 167.98 us 23.00 us 1951.00 us >> 1014 GETXATTR >> 0.08 70.97 us 22.00 us 431.00 us >> 3053 DISCARD >> 0.13 183.78 us 66.00 us 2844.00 us >> 2091 OPEN >> 1.01 303.82 us 90.00 us 3085.00 us >> 9610 FSTAT >> 3.27 316.59 us 30.00 us 3716.00 us >> 29820 LOOKUP >> 5.83 265.79 us 59.00 us 53424.00 us >> 63296 FXATTROP >> 7.95 1373.89 us 51.00 us 64822.00 us >> 16717 FSYNC >> 23.17 851.99 us 14.00 us 16210553.00 us >> 78555 FINODELK >> 24.04 87.44 us 27.00 us 6917.00 us >> 794081 WRITE >> 34.36 111.91 us 14.00 us 984871.00 us >> 886790 INODELK >> >> Duration: 7485 seconds >> Data Read: 0 bytes >> Data Written: 794080 bytes >> >> >> ----------------------- >> Here is the data from the volume that is backed by the SHDDs and >> has one failed disk: >> [root@ovirt2 yum.repos.d]# gluster volume profile data-hdd info >> Brick: 172.172.1.12:/gluster/brick3/data-hdd >> -------------------------------------------- >> Cumulative Stats: >> Block Size: 256b+ 512b+ >> 1024b+ >> No. of Reads: 1702 86 >> 16 >> No. of Writes: 0 767 >> 71 >> >> Block Size: 2048b+ 4096b+ >> 8192b+ >> No. of Reads: 19 51841 >> 2049 >> No. of Writes: 76 60668 >> 35727 >> >> Block Size: 16384b+ 32768b+ >> 65536b+ >> No. of Reads: 1744 639 >> 1088 >> No. of Writes: 8524 2410 >> 1285 >> >> Block Size: 131072b+ >> No. of Reads: 771999 >> No. of Writes: 29584 >> %-latency Avg-latency Min-Latency Max-Latency No. 
of >> calls Fop >> --------- ----------- ----------- ----------- >> ------------ ---- >> 0.00 0.00 us 0.00 us 0.00 us >> 2902 RELEASE >> 0.00 0.00 us 0.00 us 0.00 us >> 1517 RELEASEDIR >> 0.00 197.00 us 197.00 us 197.00 us >> 1 FTRUNCATE >> 0.00 70.24 us 16.00 us 758.00 us >> 51 FLUSH >> 0.00 143.93 us 82.00 us 305.00 us 57 >> REMOVEXATTR >> 0.00 178.63 us 105.00 us 712.00 us >> 60 SETATTR >> 0.00 67.30 us 19.00 us 572.00 us >> 555 LK >> 0.00 322.80 us 23.00 us 4673.00 us >> 138 READDIR >> 0.00 336.56 us 106.00 us 11994.00 us >> 237 XATTROP >> 0.00 84.70 us 28.00 us 1071.00 us >> 3469 STATFS >> 0.01 387.75 us 2.00 us 146017.00 us >> 1467 OPENDIR >> 0.01 148.59 us 21.00 us 64374.00 us >> 4454 STAT >> 0.02 783.02 us 16.00 us 93502.00 us >> 1902 GETXATTR >> 0.03 1516.10 us 17.00 us 210690.00 us >> 1364 ENTRYLK >> 0.03 2555.47 us 300.00 us 674454.00 us >> 1064 READDIRP >> 0.07 85.74 us 19.00 us 68340.00 us >> 62849 FSTAT >> 0.07 1978.12 us 59.00 us 202596.00 us >> 2729 OPEN >> 0.22 708.57 us 15.00 us 394799.00 us >> 25447 LOOKUP >> 5.94 2331.74 us 15.00 us 1099530.00 us >> 207534 FINODELK >> 7.31 8311.75 us 58.00 us 1800216.00 us >> 71668 FXATTROP >> 12.49 7735.19 us 51.00 us 3595513.00 us >> 131642 WRITE >> 17.70 957.08 us 16.00 us 13700466.00 us >> 1508160 INODELK >> 24.55 2546.43 us 26.00 us 5077347.00 us >> 786060 READ >> 31.56 49699.15 us 47.00 us 3746331.00 us >> 51777 FSYNC >> >> Duration: 10101 seconds >> Data Read: 101562897361 bytes >> Data Written: 4834450432 bytes >> >> Interval 0 Stats: >> Block Size: 256b+ 512b+ >> 1024b+ >> No. of Reads: 1702 86 >> 16 >> No. of Writes: 0 767 >> 71 >> >> Block Size: 2048b+ 4096b+ >> 8192b+ >> No. of Reads: 19 51841 >> 2049 >> No. of Writes: 76 60668 >> 35727 >> >> Block Size: 16384b+ 32768b+ >> 65536b+ >> No. of Reads: 1744 639 >> 1088 >> No. of Writes: 8524 2410 >> 1285 >> >> Block Size: 131072b+ >> No. of Reads: 771999 >> No. of Writes: 29584 >> %-latency Avg-latency Min-Latency Max-Latency No. 
of >> calls Fop >> --------- ----------- ----------- ----------- >> ------------ ---- >> 0.00 0.00 us 0.00 us 0.00 us >> 2902 RELEASE >> 0.00 0.00 us 0.00 us 0.00 us >> 1517 RELEASEDIR >> 0.00 197.00 us 197.00 us 197.00 us >> 1 FTRUNCATE >> 0.00 70.24 us 16.00 us 758.00 us >> 51 FLUSH >> 0.00 143.93 us 82.00 us 305.00 us 57 >> REMOVEXATTR >> 0.00 178.63 us 105.00 us 712.00 us >> 60 SETATTR >> 0.00 67.30 us 19.00 us 572.00 us >> 555 LK >> 0.00 322.80 us 23.00 us 4673.00 us >> 138 READDIR >> 0.00 336.56 us 106.00 us 11994.00 us >> 237 XATTROP >> 0.00 84.70 us 28.00 us 1071.00 us >> 3469 STATFS >> 0.01 387.75 us 2.00 us 146017.00 us >> 1467 OPENDIR >> 0.01 148.59 us 21.00 us 64374.00 us >> 4454 STAT >> 0.02 783.02 us 16.00 us 93502.00 us >> 1902 GETXATTR >> 0.03 1516.10 us 17.00 us 210690.00 us >> 1364 ENTRYLK >> 0.03 2555.47 us 300.00 us 674454.00 us >> 1064 READDIRP >> 0.07 85.73 us 19.00 us 68340.00 us >> 62849 FSTAT >> 0.07 1978.12 us 59.00 us 202596.00 us >> 2729 OPEN >> 0.22 708.57 us 15.00 us 394799.00 us >> 25447 LOOKUP >> 5.94 2334.57 us 15.00 us 1099530.00 us >> 207534 FINODELK >> 7.31 8311.49 us 58.00 us 1800216.00 us >> 71668 FXATTROP >> 12.49 7735.32 us 51.00 us 3595513.00 us >> 131642 WRITE >> 17.71 957.08 us 16.00 us 13700466.00 us >> 1508160 INODELK >> 24.56 2546.42 us 26.00 us 5077347.00 us >> 786060 READ >> 31.54 49651.63 us 47.00 us 3746331.00 us >> 51777 FSYNC >> >> Duration: 10101 seconds >> Data Read: 101562897361 bytes >> Data Written: 4834450432 bytes >> >> >> On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir <jim@palousetech.com> >> wrote: >> >>> Thank you for your response. >>> >>> I have 4 gluster volumes. 3 are replica 2 + arbitrator. replica >>> bricks are on ovirt1 and ovirt2, arbitrator on ovirt3. The 4th volume is >>> replica 3, with a brick on all three ovirt machines. >>> >>> The first 3 volumes are on an SSD disk; the 4th is on a Seagate >>> SSHD (same in all three machines). 
>>> On ovirt3, the SSHD has reported hard IO failures, and that brick is offline. However, the other two replicas are fully operational (although they still show contents in the heal info command that won't go away, but that may be the case until I replace the failed disk).
>>>
>>> What is bothering me is that ALL 4 gluster volumes are showing horrible performance issues. At this point, as the bad disk has been completely offlined, I would expect gluster to perform at normal speed, but that is definitely not the case.
>>>
>>> I've also noticed that the performance hits seem to come in waves: things seem to work acceptably (but slowly) for a while, then suddenly it's as if all disk IO on all volumes (including non-gluster local OS disk volumes for the hosts) pauses for about 30 seconds, then IO resumes again. During those times, I start getting VM-not-responding and host-not-responding notices, as well as the applications having major issues.
>>>
>>> I've shut down most of my VMs and am down to just my essential core VMs (I shed about 75% of my VMs). I am still experiencing the same issues.
>>>
>>> Am I correct in believing that once the failed disk was brought offline, performance should return to normal?
>>>
>>> On Tue, May 29, 2018 at 1:27 PM, Alex K <rightkicktech@gmail.com> wrote:
>>>
>>>> I would check disk status and accessibility of the mount points where your gluster volumes reside.
>>>> >>>> On Tue, May 29, 2018, 22:28 Jim Kusznir <jim@palousetech.com> >>>> wrote: >>>> >>>>> On one ovirt server, I'm now seeing these messages: >>>>> [56474.239725] blk_update_request: 63 callbacks suppressed >>>>> [56474.239732] blk_update_request: I/O error, dev dm-2, sector 0 >>>>> [56474.240602] blk_update_request: I/O error, dev dm-2, sector >>>>> 3905945472 >>>>> [56474.241346] blk_update_request: I/O error, dev dm-2, sector >>>>> 3905945584 >>>>> [56474.242236] blk_update_request: I/O error, dev dm-2, sector >>>>> 2048 >>>>> [56474.243072] blk_update_request: I/O error, dev dm-2, sector >>>>> 3905943424 >>>>> [56474.243997] blk_update_request: I/O error, dev dm-2, sector >>>>> 3905943536 >>>>> [56474.247347] blk_update_request: I/O error, dev dm-2, sector 0 >>>>> [56474.248315] blk_update_request: I/O error, dev dm-2, sector >>>>> 3905945472 >>>>> [56474.249231] blk_update_request: I/O error, dev dm-2, sector >>>>> 3905945584 >>>>> [56474.250221] blk_update_request: I/O error, dev dm-2, sector >>>>> 2048 >>>>> >>>>> >>>>> >>>>> >>>>> On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir < >>>>> jim@palousetech.com> wrote: >>>>> >>>>>> I see in messages on ovirt3 (my 3rd machine, the one upgraded >>>>>> to 4.2): >>>>>> >>>>>> May 29 11:54:41 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>>>> database connection failed (No such file or directory) >>>>>> May 29 11:54:51 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>>>> database connection failed (No such file or directory) >>>>>> May 29 11:55:01 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>>>> database connection failed (No such file or directory) >>>>>> (appears a lot). >>>>>> >>>>>> I also found on the ssh session of that, some sysv warnings >>>>>> about the backing disk for one of the gluster volumes (straight replica >>>>>> 3). The glusterfs process for that disk on that machine went offline. 
>>>>>> It's my understanding that it should continue to work with the other two machines while I attempt to replace that disk, right? Attempted writes (touching an empty file) can take 15 seconds; repeating the write later is much faster.
>>>>>>
>>>>>> Gluster generates a bunch of different log files; I don't know which ones you want, or from which machine(s).
>>>>>>
>>>>>> How do I do "volume profiling"?
>>>>>>
>>>>>> Thanks!
>>>>>>
>>>>>> On Tue, May 29, 2018 at 11:53 AM, Sahina Bose <sabose@redhat.com> wrote:
>>>>>>
>>>>>>> Do you see errors reported in the mount logs for the volume? If so, could you attach the logs?
>>>>>>> Any issues with your underlying disks? Can you also attach output of volume profiling?
>>>>>>>
>>>>>>> On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir <jim@palousetech.com> wrote:
>>>>>>>
>>>>>>>> Ok, things have gotten MUCH worse this morning. I'm getting random errors from VMs; right now about a third of my VMs have been paused due to storage issues, and most of the remaining VMs are not performing well.
>>>>>>>>
>>>>>>>> At this point, I am in full EMERGENCY mode, as my production services are now impacted, and I'm getting calls coming in with problems...
>>>>>>>>
>>>>>>>> I'd greatly appreciate help... VMs are running VERY slowly (when they run), and they are steadily getting worse. I don't know why. I was seeing CPU peaks (to 100%) on several VMs, in perfect sync, for a few minutes at a time (while the VM became unresponsive, and any Linux VMs I was logged into were giving me the CPU-stuck messages from my original post). Is all this storage related?
>>>>>>>>
>>>>>>>> I also have two different gluster volumes for VM storage, and only one had the issues, but now VMs in both are being affected at the same time and same way.
>>>>>>>> >>>>>>>> --Jim >>>>>>>> >>>>>>>> On Mon, May 28, 2018 at 10:50 PM, Sahina Bose < >>>>>>>> sabose@redhat.com> wrote: >>>>>>>> >>>>>>>>> [Adding gluster-users to look at the heal issue] >>>>>>>>> >>>>>>>>> On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir < >>>>>>>>> jim@palousetech.com> wrote: >>>>>>>>> >>>>>>>>>> Hello: >>>>>>>>>> >>>>>>>>>> I've been having some cluster and gluster performance >>>>>>>>>> issues lately. I also found that my cluster was out of date, and was >>>>>>>>>> trying to apply updates (hoping to fix some of these), and discovered the >>>>>>>>>> ovirt 4.1 repos were taken completely offline. So, I was forced to begin >>>>>>>>>> an upgrade to 4.2. According to docs I found/read, I needed only add the >>>>>>>>>> new repo, do a yum update, reboot, and be good on my hosts (did the yum >>>>>>>>>> update, the engine-setup on my hosted engine). Things seemed to work >>>>>>>>>> relatively well, except for a gluster sync issue that showed up. >>>>>>>>>> >>>>>>>>>> My cluster is a 3 node hyperconverged cluster. I upgraded >>>>>>>>>> the hosted engine first, then engine 3. When engine 3 came back up, for >>>>>>>>>> some reason one of my gluster volumes would not sync. 
>>>>>>>>>> Here's sample output:
>>>>>>>>>>
>>>>>>>>>> [root@ovirt3 ~]# gluster volume heal data-hdd info
>>>>>>>>>> Brick 172.172.1.11:/gluster/brick3/data-hdd
>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
>>>>>>>>>> Status: Connected
>>>>>>>>>> Number of entries: 8
>>>>>>>>>>
>>>>>>>>>> Brick 172.172.1.12:/gluster/brick3/data-hdd
>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
>>>>>>>>>> Status: Connected
>>>>>>>>>> Number of entries: 8
>>>>>>>>>>
>>>>>>>>>> Brick 172.172.1.13:/gluster/brick3/data-hdd
>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
>>>>>>>>>> Status: Connected
>>>>>>>>>> Number of entries: 8
>>>>>>>>>>
>>>>>>>>>> ---------
>>>>>>>>>> It's been in this state for a couple of days now, and bandwidth monitoring shows no appreciable data moving. I've tried repeatedly commanding a full heal from all three nodes in the cluster. It's always the same files that need healing.
>>>>>>>>>>
>>>>>>>>>> When running gluster volume heal data-hdd statistics, I sometimes see different information, but always some number of "heal failed" entries. It shows 0 for split brain.
>>>>>>>>>>
>>>>>>>>>> I'm not quite sure what to do. I suspect it may be due to nodes 1 and 2 still being on the older ovirt/gluster release, but I'm afraid to upgrade and reboot them until I have a good gluster sync (I don't need to create a split-brain issue). How do I proceed with this?
>>>>>>>>>>
>>>>>>>>>> Second issue: I've been experiencing VERY POOR performance on most of my VMs, to the tune that logging into a Windows 10 VM via remote desktop can take 5 minutes, and launching QuickBooks inside said VM can easily take 10 minutes. On some Linux VMs, I get random messages like this:
>>>>>>>>>> Message from syslogd@unifi at May 28 20:39:23 ...
>>>>>>>>>> kernel:[6171996.308904] NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [mongod:14766]
>>>>>>>>>>
>>>>>>>>>> (the process and PID are often different)
>>>>>>>>>>
>>>>>>>>>> I'm not quite sure what to do about this either. My initial thought was to upgrade everything to current and see if it's still there, but I cannot move forward with that until my gluster is healed...
>>>>>>>>>>
>>>>>>>>>> Thanks!
>>>>>>>>>> --Jim
>>>>>>>>>>
>>>>>>>>>> _______________________________________________
>>>>>>>>>> Users mailing list -- users@ovirt.org
>>>>>>>>>> To unsubscribe send an email to users-leave@ovirt.org
>>>>>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>>>>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>>>>>>>>>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3LEV6ZQ3JV2XLAL7NYBTXOYMYUOTIRQF/
>>>>>
>>>>> _______________________________________________
>>>>> Users mailing list -- users@ovirt.org
>>>>> To unsubscribe send an email to users-leave@ovirt.org
>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>>>>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/ACO7RFSLBSRBAIONIC2HQ6Z24ZDES5MF/
>
> _______________________________________________
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-leave@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3DEQQLJM3WHQNZJ7KEMRZVFZ52MTIL74/
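For anyone triaging the same stuck-heal symptom: the per-brick counts in the heal-info output above are easier to watch over time with a small filter. A generic sketch (the trimmed sample input stands in for live `gluster volume heal data-hdd info` output so the snippet runs anywhere; against a real cluster, pipe the command output into the same awk):

```shell
#!/bin/sh
# Sketch: summarize pending-heal counts per brick from
# `gluster volume heal <vol> info` output. A trimmed sample is inlined
# here so this runs without a gluster cluster.
sample='Brick 172.172.1.11:/gluster/brick3/data-hdd
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8/0d6080b0
Status: Connected
Number of entries: 8

Brick 172.172.1.12:/gluster/brick3/data-hdd
Status: Connected
Number of entries: 8'

echo "$sample" | awk '
/^Brick/              { brick = $2 }
/^Number of entries:/ { printf "%-45s %s pending\n", brick, $4 }'
```

Re-running this every few minutes (or under `watch`) makes it obvious whether the heal is progressing or genuinely stuck on the same entries.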

Well, things went from bad to very, very bad... It appears that during one of the 2-minute lockups, the fencing agents decided that another node in the cluster was down. As a result, 2 of the 3 nodes were simultaneously reset with a fencing-agent reboot. After the nodes came back up, the engine would not start. All running VMs (including VMs on the 3rd node, which was not rebooted) crashed.

I've now been working for about 3 hours trying to get the engine to come up. I don't know why it won't start: hosted-engine --vm-start says it's starting, but it doesn't start (virsh doesn't show any VMs running). I'm currently running --deploy, as I had run out of other options. I hope this will allow me to re-import all my existing VMs and start them back up after everything comes back up. I do have an unverified geo-rep backup; I don't know if it is a good backup (there were several prior messages to this list, but I didn't get replies to my questions; it was behaving in ways I believe to be strange, and the data directories are larger than their source). I'll see if my --deploy works, and if not, I'll be back with another message/help request.

When the dust settles and I'm at least minimally functional again, I really want to understand why all these technologies designed to offer redundancy conspired to reduce uptime and create failures where there weren't any otherwise. I thought that with hosted engine, 3 ovirt servers, and glusterfs with a minimum of replica 2+arbiter or replica 3, I should have had strong resilience against server failure or disk failure, and should have been protected from (or able to recover from) data corruption. Instead, all of the above happened (once I get my cluster back up, I still have to try to recover my webserver VM, which won't boot due to XFS journal corruption created during the gluster crashes). I think a lot of these issues were rooted in the upgrade from 4.1 to 4.2.
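A minimal triage sketch for the engine-won't-start state described above, using only read-only checks (it assumes the 4.2-era hosted-engine tooling and its usual log path, and is guarded so it degrades to a message on machines without that CLI):

```shell
#!/bin/sh
# Sketch: first read-only checks when the hosted engine refuses to start.
# Assumes an oVirt HA host with the hosted-engine CLI; guarded so it is
# harmless elsewhere.
if command -v hosted-engine >/dev/null 2>&1; then
    hosted-engine --vm-status          # each host's HA-agent view of the engine VM
    hosted-engine --check-liveliness   # is the engine answering health checks?
    virsh -r list --all                # does libvirt know about the VM at all?
    tail -n 50 /var/log/ovirt-hosted-engine-ha/agent.log
else
    echo "hosted-engine CLI not found; run this on an oVirt HA host"
fi
```

The agent log in particular usually says why the HA agent is refusing to start the VM (bad score, storage not mounted, lockspace problems), which narrows things down faster than repeated --vm-start attempts.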
--Jim

On Tue, May 29, 2018 at 6:25 PM, Jim Kusznir <jim@palousetech.com> wrote:
I also finally found the following in my system log on one server:
[10679.524491] INFO: task glusterclogro:14933 blocked for more than 120 seconds. [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10679.527144] glusterclogro D ffff97209832bf40 0 14933 1 0x00000080 [10679.527150] Call Trace: [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.527268] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [10679.527271] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [10679.527275] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.527279] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds. [10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10679.529956] glusterposixfsy D ffff972495f84f10 0 14941 1 0x00000080 [10679.529961] Call Trace: [10679.529966] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.530003] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.530008] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.530038] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.530042] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.530046] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [10679.530050] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [10679.530054] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.530058] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120 seconds. [10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
[10679.533732] glusteriotwr13 D ffff9720a83f0000 0 15486 1 0x00000080 [10679.533738] Call Trace: [10679.533747] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.533799] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.533806] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.533846] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.533852] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.533858] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [10679.533863] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [10679.533868] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.533873] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [10919.512757] INFO: task glusterclogro:14933 blocked for more than 120 seconds. [10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10919.516663] glusterclogro D ffff97209832bf40 0 14933 1 0x00000080 [10919.516677] Call Trace: [10919.516690] [<ffffffffb9913f79>] schedule+0x29/0x70 [10919.516696] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [10919.516703] [<ffffffffb951cc04>] ? blk_finish_plug+0x14/0x40 [10919.516768] [<ffffffffc05e9224>] ? _xfs_buf_ioapply+0x334/0x460 [xfs] [10919.516774] [<ffffffffb991432d>] wait_for_completion+0xfd/0x140 [10919.516782] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10919.516821] [<ffffffffc05eb0a3>] ? _xfs_buf_read+0x23/0x40 [xfs] [10919.516859] [<ffffffffc05eafa9>] xfs_buf_submit_wait+0xf9/0x1d0 [xfs] [10919.516902] [<ffffffffc061b279>] ? 
xfs_trans_read_buf_map+0x199/0x400 [xfs] [10919.516940] [<ffffffffc05eb0a3>] _xfs_buf_read+0x23/0x40 [xfs] [10919.516977] [<ffffffffc05eb1b9>] xfs_buf_read_map+0xf9/0x160 [xfs] [10919.517022] [<ffffffffc061b279>] xfs_trans_read_buf_map+0x199/0x400 [xfs] [10919.517057] [<ffffffffc05c8d04>] xfs_da_read_buf+0xd4/0x100 [xfs] [10919.517091] [<ffffffffc05c8d53>] xfs_da3_node_read+0x23/0xd0 [xfs] [10919.517126] [<ffffffffc05c9fee>] xfs_da3_node_lookup_int+0x6e/0x2f0 [xfs] [10919.517160] [<ffffffffc05d5a1d>] xfs_dir2_node_lookup+0x4d/0x170 [xfs] [10919.517194] [<ffffffffc05ccf5d>] xfs_dir_lookup+0x1bd/0x1e0 [xfs] [10919.517233] [<ffffffffc05fd8d9>] xfs_lookup+0x69/0x140 [xfs] [10919.517271] [<ffffffffc05fa018>] xfs_vn_lookup+0x78/0xc0 [xfs] [10919.517278] [<ffffffffb9425cf3>] lookup_real+0x23/0x60 [10919.517283] [<ffffffffb9426702>] __lookup_hash+0x42/0x60 [10919.517288] [<ffffffffb942d519>] SYSC_renameat2+0x3a9/0x5a0 [10919.517296] [<ffffffffb94d3753>] ? selinux_file_free_security+ 0x23/0x30 [10919.517304] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [10919.517309] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [10919.517313] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [10919.517318] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [10919.517323] [<ffffffffb942e58e>] SyS_renameat2+0xe/0x10 [10919.517328] [<ffffffffb942e5ce>] SyS_rename+0x1e/0x20 [10919.517333] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10919.517339] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [11159.496095] INFO: task glusteriotwr9:15482 blocked for more than 120 seconds. [11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11159.498978] glusteriotwr9 D ffff971fa0fa1fa0 0 15482 1 0x00000080 [11159.498984] Call Trace: [11159.498995] [<ffffffffb9911f00>] ? 
bit_wait+0x50/0x50 [11159.498999] [<ffffffffb9913f79>] schedule+0x29/0x70 [11159.499003] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11159.499056] [<ffffffffc05dd9b7>] ? xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs] [11159.499082] [<ffffffffc05dd43e>] ? xfs_iext_bno_to_irec+0x8e/0xd0 [xfs] [11159.499090] [<ffffffffb92f7a12>] ? ktime_get_ts64+0x52/0xf0 [11159.499093] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50 [11159.499097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20 [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50 [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0 [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90 [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40 [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70 [11159.499125] [<ffffffffb9394e85>] grab_cache_page_write_begin+0x55/0xc0 [11159.499130] [<ffffffffb9484b76>] iomap_write_begin+0x66/0x100 [11159.499135] [<ffffffffb9484edf>] iomap_write_actor+0xcf/0x1d0 [11159.499140] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 [11159.499144] [<ffffffffb94854e7>] iomap_apply+0xb7/0x150 [11159.499149] [<ffffffffb9485621>] iomap_file_buffered_write+0xa1/0xe0 [11159.499153] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 [11159.499182] [<ffffffffc05f025d>] xfs_file_buffered_aio_write+0x12d/0x2c0 [xfs] [11159.499213] [<ffffffffc05f057d>] xfs_file_aio_write+0x18d/0x1b0 [xfs] [11159.499217] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 [11159.499222] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 [11159.499225] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 [11159.499230] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [11159.499234] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11159.499238] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than 120 seconds. 
[11279.490197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.491665] xfsaild/dm-10 D ffff9720a8660fd0 0 1134 2 0x00000000 [11279.491671] Call Trace: [11279.491682] [<ffffffffb92a3a2e>] ? try_to_del_timer_sync+0x5e/0x90 [11279.491688] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.491744] [<ffffffffc060de36>] _xfs_log_force+0x1c6/0x2c0 [xfs] [11279.491750] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.491783] [<ffffffffc0619fec>] ? xfsaild+0x16c/0x6f0 [xfs] [11279.491817] [<ffffffffc060df5c>] xfs_log_force+0x2c/0x70 [xfs] [11279.491849] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs] [11279.491880] [<ffffffffc0619fec>] xfsaild+0x16c/0x6f0 [xfs] [11279.491913] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs] [11279.491919] [<ffffffffb92bb161>] kthread+0xd1/0xe0 [11279.491926] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 [11279.491932] [<ffffffffb9920677>] ret_from_fork_nospec_begin+0x21/0x21 [11279.491936] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 [11279.491976] INFO: task glusterclogfsyn:14934 blocked for more than 120 seconds. [11279.493466] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.494952] glusterclogfsyn D ffff97209832af70 0 14934 1 0x00000080 [11279.494957] Call Trace: [11279.494979] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod] [11279.494983] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.494987] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11279.494997] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod] [11279.495001] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11279.495005] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140 [11279.495010] [<ffffffffb92cf1b0>] ? 
wake_up_state+0x20/0x20 [11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 [11279.495049] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs] [11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] [11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.495090] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [11279.495094] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [11279.495098] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.495102] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [11279.495105] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds. [11279.496606] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.498114] glusterposixfsy D ffff972495f84f10 0 14941 1 0x00000080 [11279.498118] Call Trace: [11279.498134] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod] [11279.498138] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.498142] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11279.498152] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod] [11279.498156] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11279.498160] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140 [11279.498165] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.498169] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 [11279.498202] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs] [11279.498231] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] [11279.498238] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.498242] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [11279.498246] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [11279.498250] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.498254] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [11279.498257] INFO: task glusteriotwr1:14950 blocked for more than 120 seconds. 
[11279.499789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.501343] glusteriotwr1 D ffff97208b6daf70 0 14950 1 0x00000080 [11279.501348] Call Trace: [11279.501353] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.501390] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [11279.501396] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.501428] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [11279.501432] [<ffffffffb944ef3f>] generic_write_sync+0x4f/0x70 [11279.501461] [<ffffffffc05f0545>] xfs_file_aio_write+0x155/0x1b0 [xfs] [11279.501466] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 [11279.501471] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 [11279.501475] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 [11279.501479] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [11279.501483] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.501489] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [11279.501493] INFO: task glusteriotwr4:14953 blocked for more than 120 seconds. [11279.503047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.504630] glusteriotwr4 D ffff972499f2bf40 0 14953 1 0x00000080 [11279.504635] Call Trace: [11279.504640] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.504676] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [11279.504681] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.504710] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [11279.504714] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.504718] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [11279.504722] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [11279.504725] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.504730] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [12127.466494] perf: interrupt took too long (8263 > 8150), lowering kernel.perf_event_max_sample_rate to 24000
--------------------
I think this is the cause of the massive ovirt performance issues, irrespective of gluster volume. At the time this happened, I was also ssh'ed into the host and was running some rpm query commands. I had just run rpm -qa |grep glusterfs (to verify what version was actually installed), and that command took almost 2 minutes to return! Normally it takes less than 2 seconds. That is all pure local SSD IO, too...
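A crude way to confirm those local-IO stalls independently of rpm is to repeatedly fsync a small file and flag slow passes. This is a generic sketch (plain shell, no gluster or oVirt assumptions; on a healthy SSD each pass completes in milliseconds):

```shell
#!/bin/sh
# Sketch: probe for local-IO stalls by timing small synced writes.
f=$(mktemp /tmp/ioprobe.XXXXXX)
i=0
while [ $i -lt 5 ]; do
    start=$(date +%s)
    dd if=/dev/zero of="$f" bs=4k count=256 conv=fsync 2>/dev/null
    end=$(date +%s)
    if [ $((end - start)) -ge 1 ]; then
        echo "pass $i stalled: $((end - start))s"
    fi
    i=$((i + 1))
done
rm -f "$f"
echo "probe done"
```

Running this in a loop during one of the "wave" episodes would show whether the stall is really at the local block layer rather than something gluster-specific.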
I'm no expert, but it's my understanding that any time software causes these kinds of issues, it's a serious bug in that software, even if it's just mishandled exceptions. Is this correct?
--Jim
On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir <jim@palousetech.com> wrote:
I think this is the profile information for one of the volumes that lives on the SSDs and is fully operational with no down/problem disks:
[root@ovirt2 yum.repos.d]# gluster volume profile data info Brick: ovirt2.nwfiber.com:/gluster/brick2/data ---------------------------------------------- Cumulative Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 983 2696 1059 No. of Writes: 0 1113 302
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 852 88608 53526 No. of Writes: 522 812340 76257
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 54351 241901 15024 No. of Writes: 21636 8656 8976
Block Size: 131072b+ No. of Reads: 524156 No. of Writes: 296071 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 4189 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1257 RELEASEDIR 0.00 46.19 us 12.00 us 187.00 us 69 FLUSH 0.00 147.00 us 78.00 us 367.00 us 86 REMOVEXATTR 0.00 223.46 us 24.00 us 1166.00 us 149 READDIR 0.00 565.34 us 76.00 us 3639.00 us 88 FTRUNCATE 0.00 263.28 us 20.00 us 28385.00 us 228 LK 0.00 98.84 us 2.00 us 880.00 us 1198 OPENDIR 0.00 91.59 us 26.00 us 10371.00 us 3853 STATFS 0.00 494.14 us 17.00 us 193439.00 us 1171 GETXATTR 0.00 299.42 us 35.00 us 9799.00 us 2044 READDIRP 0.00 1965.31 us 110.00 us 382258.00 us 321 XATTROP 0.01 113.40 us 24.00 us 61061.00 us 8134 STAT 0.01 755.38 us 57.00 us 607603.00 us 3196 DISCARD 0.05 2690.09 us 58.00 us 2704761.00 us 3206 OPEN 0.10 119978.25 us 97.00 us 9406684.00 us 154 SETATTR 0.18 101.73 us 28.00 us 700477.00 us 313379 FSTAT 0.23 1059.84 us 25.00 us 2716124.00 us 38255 LOOKUP 0.47 1024.11 us 54.00 us 6197164.00 us 81455 FXATTROP 1.72 2984.00 us 15.00 us 37098954.00 us 103020 FINODELK 5.92 44315.32 us 51.00 us 24731536.00 us 23957 FSYNC 13.27 2399.78 us 25.00 us 22089540.00 us 991005 READ 37.00 5980.43 us 52.00 us 22099889.00 us 1108976 WRITE 41.04 5452.75 us 13.00 us 22102452.00 us 1349053 INODELK
Duration: 10026 seconds Data Read: 80046027759 bytes Data Written: 44496632320 bytes
Interval 1 Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 983 2696 1059 No. of Writes: 0 838 185
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 852 85856 51575 No. of Writes: 382 705802 57812
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 52673 232093 14984 No. of Writes: 13499 4908 4242
Block Size: 131072b+ No. of Reads: 460040 No. of Writes: 6411 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2093 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1093 RELEASEDIR 0.00 53.38 us 26.00 us 111.00 us 16 FLUSH 0.00 145.14 us 78.00 us 367.00 us 71 REMOVEXATTR 0.00 190.96 us 114.00 us 298.00 us 71 SETATTR 0.00 213.38 us 24.00 us 1145.00 us 90 READDIR 0.00 263.28 us 20.00 us 28385.00 us 228 LK 0.00 101.76 us 2.00 us 880.00 us 1093 OPENDIR 0.01 93.60 us 27.00 us 10371.00 us 3090 STATFS 0.02 537.47 us 17.00 us 193439.00 us 1038 GETXATTR 0.03 297.44 us 35.00 us 9799.00 us 1990 READDIRP 0.03 2357.28 us 110.00 us 382258.00 us 253 XATTROP 0.04 385.93 us 58.00 us 47593.00 us 2091 OPEN 0.04 114.86 us 24.00 us 61061.00 us 7715 STAT 0.06 444.59 us 57.00 us 333240.00 us 3053 DISCARD 0.42 316.24 us 25.00 us 290728.00 us 29823 LOOKUP 0.73 257.92 us 54.00 us 344812.00 us 63296 FXATTROP 1.37 98.30 us 28.00 us 67621.00 us 313172 FSTAT 1.58 2124.69 us 51.00 us 849200.00 us 16717 FSYNC 5.73 162.46 us 52.00 us 748492.00 us 794079 WRITE 7.19 2065.17 us 16.00 us 37098954.00 us 78381 FINODELK 36.44 886.32 us 25.00 us 2216436.00 us 925421 READ 46.30 1178.04 us 13.00 us 1700704.00 us 884635 INODELK
Duration: 7485 seconds Data Read: 71250527215 bytes Data Written: 5119903744 bytes
Brick: ovirt3.nwfiber.com:/gluster/brick2/data ---------------------------------------------- Cumulative Stats: Block Size: 1b+ No. of Reads: 0 No. of Writes: 3264419 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 90 FORGET 0.00 0.00 us 0.00 us 0.00 us 9462 RELEASE 0.00 0.00 us 0.00 us 0.00 us 4254 RELEASEDIR 0.00 50.52 us 13.00 us 190.00 us 71 FLUSH 0.00 186.97 us 87.00 us 713.00 us 86 REMOVEXATTR 0.00 79.32 us 33.00 us 189.00 us 228 LK 0.00 220.98 us 129.00 us 513.00 us 86 SETATTR 0.01 259.30 us 26.00 us 2632.00 us 137 READDIR 0.02 322.76 us 145.00 us 2125.00 us 321 XATTROP 0.03 109.55 us 2.00 us 1258.00 us 1193 OPENDIR 0.05 70.21 us 21.00 us 431.00 us 3196 DISCARD 0.05 169.26 us 21.00 us 2315.00 us 1545 GETXATTR 0.12 176.85 us 63.00 us 2844.00 us 3206 OPEN 0.61 303.49 us 90.00 us 3085.00 us 9633 FSTAT 2.44 305.66 us 28.00 us 3716.00 us 38230 LOOKUP 4.52 266.22 us 55.00 us 53424.00 us 81455 FXATTROP 6.96 1397.99 us 51.00 us 64822.00 us 23889 FSYNC 16.48 84.74 us 25.00 us 6917.00 us 932592 WRITE 30.16 106.90 us 13.00 us 3920189.00 us 1353046 INODELK 38.55 1794.52 us 14.00 us 16210553.00 us 103039 FINODELK
Duration: 66562 seconds Data Read: 0 bytes Data Written: 3264419 bytes
Interval 1 Stats: Block Size: 1b+ No. of Reads: 0 No. of Writes: 794080 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2093 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1093 RELEASEDIR 0.00 70.31 us 26.00 us 125.00 us 16 FLUSH 0.00 193.10 us 103.00 us 713.00 us 71 REMOVEXATTR 0.01 227.32 us 133.00 us 513.00 us 71 SETATTR 0.01 79.32 us 33.00 us 189.00 us 228 LK 0.01 259.83 us 35.00 us 1138.00 us 89 READDIR 0.03 318.26 us 145.00 us 2047.00 us 253 XATTROP 0.04 112.67 us 3.00 us 1258.00 us 1093 OPENDIR 0.06 167.98 us 23.00 us 1951.00 us 1014 GETXATTR 0.08 70.97 us 22.00 us 431.00 us 3053 DISCARD 0.13 183.78 us 66.00 us 2844.00 us 2091 OPEN 1.01 303.82 us 90.00 us 3085.00 us 9610 FSTAT 3.27 316.59 us 30.00 us 3716.00 us 29820 LOOKUP 5.83 265.79 us 59.00 us 53424.00 us 63296 FXATTROP 7.95 1373.89 us 51.00 us 64822.00 us 16717 FSYNC 23.17 851.99 us 14.00 us 16210553.00 us 78555 FINODELK 24.04 87.44 us 27.00 us 6917.00 us 794081 WRITE 34.36 111.91 us 14.00 us 984871.00 us 886790 INODELK
Duration: 7485 seconds Data Read: 0 bytes Data Written: 794080 bytes
----------------------- Here is the data from the volume that is backed by the SSHDs and has one failed disk: [root@ovirt2 yum.repos.d]# gluster volume profile data-hdd info Brick: 172.172.1.12:/gluster/brick3/data-hdd -------------------------------------------- Cumulative Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 1702 86 16 No. of Writes: 0 767 71
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 19 51841 2049 No. of Writes: 76 60668 35727
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 1744 639 1088 No. of Writes: 8524 2410 1285
Block Size: 131072b+ No. of Reads: 771999 No. of Writes: 29584 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2902 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1517 RELEASEDIR 0.00 197.00 us 197.00 us 197.00 us 1 FTRUNCATE 0.00 70.24 us 16.00 us 758.00 us 51 FLUSH 0.00 143.93 us 82.00 us 305.00 us 57 REMOVEXATTR 0.00 178.63 us 105.00 us 712.00 us 60 SETATTR 0.00 67.30 us 19.00 us 572.00 us 555 LK 0.00 322.80 us 23.00 us 4673.00 us 138 READDIR 0.00 336.56 us 106.00 us 11994.00 us 237 XATTROP 0.00 84.70 us 28.00 us 1071.00 us 3469 STATFS 0.01 387.75 us 2.00 us 146017.00 us 1467 OPENDIR 0.01 148.59 us 21.00 us 64374.00 us 4454 STAT 0.02 783.02 us 16.00 us 93502.00 us 1902 GETXATTR 0.03 1516.10 us 17.00 us 210690.00 us 1364 ENTRYLK 0.03 2555.47 us 300.00 us 674454.00 us 1064 READDIRP 0.07 85.74 us 19.00 us 68340.00 us 62849 FSTAT 0.07 1978.12 us 59.00 us 202596.00 us 2729 OPEN 0.22 708.57 us 15.00 us 394799.00 us 25447 LOOKUP 5.94 2331.74 us 15.00 us 1099530.00 us 207534 FINODELK 7.31 8311.75 us 58.00 us 1800216.00 us 71668 FXATTROP 12.49 7735.19 us 51.00 us 3595513.00 us 131642 WRITE 17.70 957.08 us 16.00 us 13700466.00 us 1508160 INODELK 24.55 2546.43 us 26.00 us 5077347.00 us 786060 READ 31.56 49699.15 us 47.00 us 3746331.00 us 51777 FSYNC
Duration: 10101 seconds Data Read: 101562897361 bytes Data Written: 4834450432 bytes
Interval 0 Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 1702 86 16 No. of Writes: 0 767 71
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 19 51841 2049 No. of Writes: 76 60668 35727
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 1744 639 1088 No. of Writes: 8524 2410 1285
Block Size: 131072b+ No. of Reads: 771999 No. of Writes: 29584 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2902 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1517 RELEASEDIR 0.00 197.00 us 197.00 us 197.00 us 1 FTRUNCATE 0.00 70.24 us 16.00 us 758.00 us 51 FLUSH 0.00 143.93 us 82.00 us 305.00 us 57 REMOVEXATTR 0.00 178.63 us 105.00 us 712.00 us 60 SETATTR 0.00 67.30 us 19.00 us 572.00 us 555 LK 0.00 322.80 us 23.00 us 4673.00 us 138 READDIR 0.00 336.56 us 106.00 us 11994.00 us 237 XATTROP 0.00 84.70 us 28.00 us 1071.00 us 3469 STATFS 0.01 387.75 us 2.00 us 146017.00 us 1467 OPENDIR 0.01 148.59 us 21.00 us 64374.00 us 4454 STAT 0.02 783.02 us 16.00 us 93502.00 us 1902 GETXATTR 0.03 1516.10 us 17.00 us 210690.00 us 1364 ENTRYLK 0.03 2555.47 us 300.00 us 674454.00 us 1064 READDIRP 0.07 85.73 us 19.00 us 68340.00 us 62849 FSTAT 0.07 1978.12 us 59.00 us 202596.00 us 2729 OPEN 0.22 708.57 us 15.00 us 394799.00 us 25447 LOOKUP 5.94 2334.57 us 15.00 us 1099530.00 us 207534 FINODELK 7.31 8311.49 us 58.00 us 1800216.00 us 71668 FXATTROP 12.49 7735.32 us 51.00 us 3595513.00 us 131642 WRITE 17.71 957.08 us 16.00 us 13700466.00 us 1508160 INODELK 24.56 2546.42 us 26.00 us 5077347.00 us 786060 READ 31.54 49651.63 us 47.00 us 3746331.00 us 51777 FSYNC
Duration: 10101 seconds Data Read: 101562897361 bytes Data Written: 4834450432 bytes
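The wide profile tables above are easier to read once filtered down to the FOPs that dominate latency (on data-hdd, FSYNC, READ, and the lock FOPs). A sketch, with sample rows copied from the table above; the filter keys on the first field (%-latency), second field (average latency), and last field (FOP name), which line up the same way in full live `gluster volume profile data-hdd info` output:

```shell
#!/bin/sh
# Sketch: show only FOPs accounting for more than 10% of latency.
# Sample rows inlined from the data-hdd profile; pipe live profile
# output through the same awk filter.
profile='%-latency  Avg-latency  Fop
31.56  49699.15 us  FSYNC
24.55   2546.43 us  READ
17.70    957.08 us  INODELK
 5.94   2331.74 us  FINODELK'

echo "$profile" | awk '$1 ~ /^[0-9.]+$/ && $1 + 0 > 10 {
    printf "%s: %s%% of latency, avg %s us\n", $NF, $1, $2 }'
```

An FSYNC average near 50 ms on the data-hdd bricks, against sub-millisecond lock averages on the SSD volume, is the kind of contrast this filter surfaces at a glance.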
On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir <jim@palousetech.com> wrote:
Thank you for your response.
I have 4 gluster volumes. Three are replica 2 + arbiter, with the replica bricks on ovirt1 and ovirt2 and the arbiter on ovirt3. The 4th volume is replica 3, with a brick on each of the three ovirt machines.
The first 3 volumes are on an SSD disk; the 4th is on a Seagate SSHD (same in all three machines). On ovirt3, the SSHD has reported hard IO failures, and that brick is offline. However, the other two replicas are fully operational (although they still show contents in the heal info command that won't go away, but that may be the case until I replace the failed disk).
What is bothering me is that ALL 4 gluster volumes are showing horrible performance issues. At this point, as the bad disk has been completely offlined, I would expect gluster to perform at normal speed, but that is definitely not the case.
I've also noticed that the performance hits seem to come in waves: things work acceptably (but slowly) for a while, then suddenly it's as if all disk IO on all volumes (including the hosts' non-gluster local OS disk volumes) pauses for about 30 seconds, then IO resumes again. During those times, I start getting "VM not responding" and "host not responding" notices, and the applications have major issues.
I've shut down most of my VMs and am down to just my essential core VMs (I shed about 75% of my VMs). I'm still experiencing the same issues.
Am I correct in believing that once the failed disk was brought offline that performance should return to normal?
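On the failed-disk side of this: a hedged outline of the usual brick-replacement flow, with the thread's volume and brick names plugged in. The syntax is from the gluster-3.x era docs, so verify it against `gluster volume help` on the installed version; the destructive steps are deliberately left as comments so this script is safe to run anywhere.

```shell
#!/bin/sh
# Sketch: replacing a failed brick disk (gluster-3.x era reset-brick flow).
# Destructive commands are commented out on purpose.
VOL=data-hdd
BRICK=172.172.1.13:/gluster/brick3/data-hdd

# 1. Confirm which brick is down:
#      gluster volume status $VOL
# 2. Detach the dead brick:
#      gluster volume reset-brick $VOL $BRICK start
# 3. Swap the physical disk, recreate the filesystem, remount the brick path.
# 4. Re-attach the brick and trigger a full heal:
#      gluster volume reset-brick $VOL $BRICK $BRICK commit force
#      gluster volume heal $VOL full
echo "replacement plan for $BRICK in volume $VOL"
```

Until the brick is actually replaced and healed, the heal-info entries on the surviving replicas are expected to persist, which matches what is described above.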
On Tue, May 29, 2018 at 1:27 PM, Alex K <rightkicktech@gmail.com> wrote:
I would check disks status and accessibility of mount points where your gluster volumes reside.
On Tue, May 29, 2018, 22:28 Jim Kusznir <jim@palousetech.com> wrote:
On one ovirt server, I'm now seeing these messages:
[56474.239725] blk_update_request: 63 callbacks suppressed
[56474.239732] blk_update_request: I/O error, dev dm-2, sector 0
[56474.240602] blk_update_request: I/O error, dev dm-2, sector 3905945472
[56474.241346] blk_update_request: I/O error, dev dm-2, sector 3905945584
[56474.242236] blk_update_request: I/O error, dev dm-2, sector 2048
[56474.243072] blk_update_request: I/O error, dev dm-2, sector 3905943424
[56474.243997] blk_update_request: I/O error, dev dm-2, sector 3905943536
[56474.247347] blk_update_request: I/O error, dev dm-2, sector 0
[56474.248315] blk_update_request: I/O error, dev dm-2, sector 3905945472
[56474.249231] blk_update_request: I/O error, dev dm-2, sector 3905945584
[56474.250221] blk_update_request: I/O error, dev dm-2, sector 2048
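A quick way to pin down which device "dm-2" is and which sectors are failing: on the host itself, `dmsetup ls` (device-mapper name mapping) and `lsblk` (which physical disk backs it) do the mapping, and the failing sectors can be condensed out of dmesg. A sketch, with the quoted lines inlined as a sample so it runs anywhere:

```shell
#!/bin/sh
# Sketch: extract the distinct failing sectors from blk_update_request
# dmesg lines. On the affected host, also run:
#   dmsetup ls   # dm-N <-> LV name mapping
#   lsblk        # tree view showing the backing physical disk
log='[56474.239732] blk_update_request: I/O error, dev dm-2, sector 0
[56474.240602] blk_update_request: I/O error, dev dm-2, sector 3905945472
[56474.242236] blk_update_request: I/O error, dev dm-2, sector 2048
[56474.247347] blk_update_request: I/O error, dev dm-2, sector 0'

echo "$log" | awk -F'sector ' '/I\/O error/ { print $2 }' | sort -un |
    while read -r s; do echo "failing sector: $s"; done
```

Repeated errors at sector 0 plus sectors near the end of the device usually mean the disk itself (or its cabling/controller) is failing, not the filesystem on top.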
On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir <jim@palousetech.com> wrote:
I see in messages on ovirt3 (my 3rd machine, the one upgraded to 4.2):
May 29 11:54:41 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
May 29 11:54:51 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
May 29 11:55:01 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)

(appears a lot).
I also found, in the ssh session on that host, some system-log warnings about the backing disk for one of the gluster volumes (the straight replica 3 one). The glusterfs process for that disk on that machine went offline. It's my understanding that it should continue to work with the other two machines while I attempt to replace that disk, right? Attempted writes (touching an empty file) can take 15 seconds; repeating the same write later is much faster.

Gluster generates a bunch of different log files, and I don't know which ones you want, or from which machine(s).
How do I do "volume profiling"?
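(For reference, the profiling tables that appear later in this thread come from gluster's built-in per-volume profiler. A guarded sketch of how it is toggled, using gluster-3.x syntax; verify against the installed version, and note the commands are a no-op here without the gluster CLI:)

```shell
#!/bin/sh
# Sketch: enable, read, and disable gluster's per-volume FOP profiler.
VOL=data-hdd
if command -v gluster >/dev/null 2>&1; then
    gluster volume profile "$VOL" start   # begin collecting per-FOP stats
    # ...let it observe a few minutes of real load...
    gluster volume profile "$VOL" info    # prints cumulative + interval tables
    gluster volume profile "$VOL" stop    # profiling adds a little overhead
else
    echo "gluster CLI not found on this machine"
fi
```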
Thanks!
On Tue, May 29, 2018 at 11:53 AM, Sahina Bose <sabose@redhat.com> wrote:
> Do you see errors reported in the mount logs for the volume? If so, > could you attach the logs? > Any issues with your underlying disks. Can you also attach output of > volume profiling? > > On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir <jim@palousetech.com> > wrote: > >> Ok, things have gotten MUCH worse this morning. I'm getting random >> errors from VMs, right now, about a third of my VMs have been paused due to >> storage issues, and most of the remaining VMs are not performing well. >> >> At this point, I am in full EMERGENCY mode, as my production >> services are now impacted, and I'm getting calls coming in with problems... >> >> I'd greatly appreciate help...VMs are running VERY slowly (when >> they run), and they are steadily getting worse. I don't know why. I was >> seeing CPU peaks (to 100%) on several VMs, in perfect sync, for a few >> minutes at a time (while the VM became unresponsive and any VMs I was >> logged into that were linux were giving me the CPU stuck messages in my >> origional post). Is all this storage related? >> >> I also have two different gluster volumes for VM storage, and only >> one had the issues, but now VMs in both are being affected at the same time >> and same way. >> >> --Jim >> >> On Mon, May 28, 2018 at 10:50 PM, Sahina Bose <sabose@redhat.com> >> wrote: >> >>> [Adding gluster-users to look at the heal issue] >>> >>> On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir <jim@palousetech.com> >>> wrote: >>> >>>> Hello: >>>> >>>> I've been having some cluster and gluster performance issues >>>> lately. I also found that my cluster was out of date, and was trying to >>>> apply updates (hoping to fix some of these), and discovered the ovirt 4.1 >>>> repos were taken completely offline. So, I was forced to begin an upgrade >>>> to 4.2. According to docs I found/read, I needed only add the new repo, do >>>> a yum update, reboot, and be good on my hosts (did the yum update, the >>>> engine-setup on my hosted engine). 
Things seemed to work relatively well, >>>> except for a gluster sync issue that showed up. >>>> >>>> My cluster is a 3 node hyperconverged cluster. I upgraded the >>>> hosted engine first, then engine 3. When engine 3 came back up, for some >>>> reason one of my gluster volumes would not sync. Here's sample output: >>>> >>>> [root@ovirt3 ~]# gluster volume heal data-hdd info >>>> Brick 172.172.1.11:/gluster/brick3/data-hdd >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>>> 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>>> 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>> Status: Connected >>>> Number of entries: 8 >>>> >>>> Brick 172.172.1.12:/gluster/brick3/data-hdd >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>> 
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>> Status: Connected >>>> Number of entries: 8 >>>> >>>> Brick 172.172.1.13:/gluster/brick3/data-hdd >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>> Status: Connected >>>> Number of entries: 8 >>>> >>>> --------- >>>> It's been in this state for a couple days now, and bandwidth >>>> monitoring shows no appreciable data moving.
I've tried repeatedly >>>> commanding a full heal from all three nodes in the cluster. It's always the >>>> same files that need healing. >>>> >>>> When running gluster volume heal data-hdd statistics, I see >>>> sometimes different information, but always some number of "heal failed" >>>> entries. It shows 0 for split brain. >>>> >>>> I'm not quite sure what to do. I suspect it may be due to nodes >>>> 1 and 2 still being on the older ovirt/gluster release, but I'm afraid to >>>> upgrade and reboot them until I have a good gluster sync (don't need to >>>> create a split brain issue). How do I proceed with this? >>>> >>>> Second issue: I've been experiencing VERY POOR performance on >>>> most of my VMs. To the tune that logging into a windows 10 vm via remote >>>> desktop can take 5 minutes, launching quickbooks inside said vm can easily >>>> take 10 minutes. On some linux VMs, I get random messages like this: >>>> Message from syslogd@unifi at May 28 20:39:23 ... >>>> kernel:[6171996.308904] NMI watchdog: BUG: soft lockup - CPU#0 >>>> stuck for 22s! [mongod:14766] >>>> >>>> (the process and PID are often different) >>>> >>>> I'm not quite sure what to do about this either. My initial >>>> thought was to upgrade everything to current and see if it's still there, but I >>>> cannot move forward with that until my gluster is healed... >>>> >>>> Thanks! >>>> --Jim >>>> >>>> _______________________________________________ >>>> Users mailing list -- users@ovirt.org >>>> To unsubscribe send an email to users-leave@ovirt.org >>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ >>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ >>>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3LEV6ZQ3JV2XLAL7NYBTXOYMYUOTIRQF/ >>>> >>>> >>> >> >
_______________________________________________ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-leave@ovirt.org Privacy Statement: https://www.ovirt.org/site/privacy-policy/ oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/ACO7RFSLBSRBAIONIC2HQ6Z24ZDES5MF/
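[Editor's aside: since the heal output above keeps listing the same eight entries per brick, it can help to tally pending entries per brick rather than eyeballing the raw dump. A minimal sketch; the heal-info sample is inlined and abbreviated for illustration, and on a live host you would feed the output of `gluster volume heal data-hdd info` in directly:]

```shell
#!/bin/sh
# Count pending heal entries per brick from saved `gluster volume heal <vol> info`
# output. Sample data is inlined here; replace with real command output.
cat > /tmp/heal-info.txt <<'EOF'
Brick 172.172.1.11:/gluster/brick3/data-hdd
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8/0d6080b0
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733/b453a300
Status: Connected
Number of entries: 2

Brick 172.172.1.12:/gluster/brick3/data-hdd
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8/0d6080b0
Status: Connected
Number of entries: 1
EOF
# For each "Brick" header, count the entry lines (they start with "/") that
# follow it, and print the tally when the "Number of entries:" line is reached.
awk '/^Brick/ {brick=$2; n=0}
     /^\// {n++}
     /^Number of entries:/ {print brick, "pending:", n}' /tmp/heal-info.txt
```

If the counts differ from gluster's own "Number of entries" figure, the saved output was truncated; if the same paths persist across runs, self-heal is genuinely stalled rather than slowly draining.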

hosted-engine --deploy failed (would not come up on my existing gluster storage). However, I realized no changes were written to my existing storage. So, I went back to trying to get my old engine running. hosted-engine --vm-status is now taking a very long time (5+ minutes) to return, and it returns stale information everywhere. I thought perhaps the lockspace was corrupt, so I tried to clean that and the metadata, but both are failing (--clean-metadata has hung and I can't even ctrl-c out of it). How can I reinitialize all the lockspace/metadata safely? There is no engine or VMs running currently.... --Jim On Tue, May 29, 2018 at 9:33 PM, Jim Kusznir <jim@palousetech.com> wrote:
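[Editor's aside: a rough sketch of the lockspace-reinit sequence, assuming oVirt 4.2's hosted-engine tooling; the option names (`--reinitialize-lockspace`, the maintenance modes) should be verified against `hosted-engine --help` on the exact installed version before running anything, and the HA services must be stopped on every host first:]

```shell
# Sketch only: rebuild the hosted-engine sanlock lockspace when it is corrupt.
# Assumes oVirt 4.2 hosted-engine CLI; verify options with `hosted-engine --help`.
hosted-engine --set-maintenance --mode=global    # freeze HA state changes
systemctl stop ovirt-ha-agent ovirt-ha-broker    # repeat on every host
hosted-engine --reinitialize-lockspace --force   # rewrite the sanlock lockspace
systemctl start ovirt-ha-broker ovirt-ha-agent   # repeat on every host
hosted-engine --set-maintenance --mode=none      # let HA resume
```

The global-maintenance step matters: without it the HA agents will fight the cleanup and --clean-metadata style operations can hang exactly as described above.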
Well, things went from bad to very, very bad....
It appears that during one of the 2 minute lockups, the fencing agents decided that another node in the cluster was down. As a result, 2 of the 3 nodes were simultaneously reset by the fencing agents. After the nodes came back up, the engine would not start. All running VMs (including VMs on the 3rd node, which was not rebooted) crashed.
I've now been working for about 3 hours trying to get the engine to come up. I don't know why it won't start. hosted-engine --vm-start says its starting, but it doesn't start (virsh doesn't show any VMs running). I'm currently running --deploy, as I had run out of options for anything else I can come up with. I hope this will allow me to re-import all my existing VMs and allow me to start them back up after everything comes back up.
I do have an unverified geo-rep backup; I don't know if it is a good backup (there were several prior messages to this list, but I didn't get replies to my questions. It was behaving in what I believe to be a "strange" manner, and the data directories are larger than their source).
I'll see if my --deploy works, and if not, I'll be back with another message/help request.
When the dust settles and I'm at least minimally functional again, I really want to understand why all these technologies designed to offer redundancy conspired to reduce uptime and create failures where there weren't any otherwise. I thought that with hosted engine, 3 ovirt servers, and glusterfs with a minimum of replica 2+arb or replica 3, I should have had strong resilience against server failure or disk failure, and should have been protected from / able to recover from data corruption. Instead, all of the above happened (once I get my cluster back up, I still have to try and recover my webserver VM, which won't boot due to XFS corrupt journal issues created during the gluster crashes). I think a lot of these issues were rooted in the upgrade from 4.1 to 4.2.
--Jim
On Tue, May 29, 2018 at 6:25 PM, Jim Kusznir <jim@palousetech.com> wrote:
I also finally found the following in my system log on one server:
[10679.524491] INFO: task glusterclogro:14933 blocked for more than 120 seconds. [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10679.527144] glusterclogro D ffff97209832bf40 0 14933 1 0x00000080 [10679.527150] Call Trace: [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.527268] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [10679.527271] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [10679.527275] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.527279] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds. [10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10679.529956] glusterposixfsy D ffff972495f84f10 0 14941 1 0x00000080 [10679.529961] Call Trace: [10679.529966] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.530003] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.530008] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.530038] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.530042] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.530046] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [10679.530050] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [10679.530054] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.530058] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120 seconds. [10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[10679.533732] glusteriotwr13 D ffff9720a83f0000 0 15486 1 0x00000080 [10679.533738] Call Trace: [10679.533747] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.533799] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.533806] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.533846] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.533852] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.533858] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [10679.533863] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [10679.533868] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.533873] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [10919.512757] INFO: task glusterclogro:14933 blocked for more than 120 seconds. [10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10919.516663] glusterclogro D ffff97209832bf40 0 14933 1 0x00000080 [10919.516677] Call Trace: [10919.516690] [<ffffffffb9913f79>] schedule+0x29/0x70 [10919.516696] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [10919.516703] [<ffffffffb951cc04>] ? blk_finish_plug+0x14/0x40 [10919.516768] [<ffffffffc05e9224>] ? _xfs_buf_ioapply+0x334/0x460 [xfs] [10919.516774] [<ffffffffb991432d>] wait_for_completion+0xfd/0x140 [10919.516782] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10919.516821] [<ffffffffc05eb0a3>] ? _xfs_buf_read+0x23/0x40 [xfs] [10919.516859] [<ffffffffc05eafa9>] xfs_buf_submit_wait+0xf9/0x1d0 [xfs] [10919.516902] [<ffffffffc061b279>] ? xfs_trans_read_buf_map+0x199/0x400 [xfs]
[10919.516940] [<ffffffffc05eb0a3>] _xfs_buf_read+0x23/0x40 [xfs] [10919.516977] [<ffffffffc05eb1b9>] xfs_buf_read_map+0xf9/0x160 [xfs] [10919.517022] [<ffffffffc061b279>] xfs_trans_read_buf_map+0x199/0x400 [xfs] [10919.517057] [<ffffffffc05c8d04>] xfs_da_read_buf+0xd4/0x100 [xfs] [10919.517091] [<ffffffffc05c8d53>] xfs_da3_node_read+0x23/0xd0 [xfs] [10919.517126] [<ffffffffc05c9fee>] xfs_da3_node_lookup_int+0x6e/0x2f0 [xfs] [10919.517160] [<ffffffffc05d5a1d>] xfs_dir2_node_lookup+0x4d/0x170 [xfs] [10919.517194] [<ffffffffc05ccf5d>] xfs_dir_lookup+0x1bd/0x1e0 [xfs] [10919.517233] [<ffffffffc05fd8d9>] xfs_lookup+0x69/0x140 [xfs] [10919.517271] [<ffffffffc05fa018>] xfs_vn_lookup+0x78/0xc0 [xfs] [10919.517278] [<ffffffffb9425cf3>] lookup_real+0x23/0x60 [10919.517283] [<ffffffffb9426702>] __lookup_hash+0x42/0x60 [10919.517288] [<ffffffffb942d519>] SYSC_renameat2+0x3a9/0x5a0 [10919.517296] [<ffffffffb94d3753>] ? selinux_file_free_security+0x23/0x30 [10919.517304] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [10919.517309] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [10919.517313] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [10919.517318] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [10919.517323] [<ffffffffb942e58e>] SyS_renameat2+0xe/0x10 [10919.517328] [<ffffffffb942e5ce>] SyS_rename+0x1e/0x20 [10919.517333] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10919.517339] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [11159.496095] INFO: task glusteriotwr9:15482 blocked for more than 120 seconds. [11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11159.498978] glusteriotwr9 D ffff971fa0fa1fa0 0 15482 1 0x00000080 [11159.498984] Call Trace: [11159.498995] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50
[11159.498999] [<ffffffffb9913f79>] schedule+0x29/0x70 [11159.499003] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11159.499056] [<ffffffffc05dd9b7>] ? xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs] [11159.499082] [<ffffffffc05dd43e>] ? xfs_iext_bno_to_irec+0x8e/0xd0 [xfs] [11159.499090] [<ffffffffb92f7a12>] ? ktime_get_ts64+0x52/0xf0 [11159.499093] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50 [11159.499097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20 [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50 [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0 [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90 [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40 [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70 [11159.499125] [<ffffffffb9394e85>] grab_cache_page_write_begin+0x55/0xc0 [11159.499130] [<ffffffffb9484b76>] iomap_write_begin+0x66/0x100 [11159.499135] [<ffffffffb9484edf>] iomap_write_actor+0xcf/0x1d0 [11159.499140] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 [11159.499144] [<ffffffffb94854e7>] iomap_apply+0xb7/0x150 [11159.499149] [<ffffffffb9485621>] iomap_file_buffered_write+0xa1/0xe0 [11159.499153] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 [11159.499182] [<ffffffffc05f025d>] xfs_file_buffered_aio_write+0x12d/0x2c0 [xfs] [11159.499213] [<ffffffffc05f057d>] xfs_file_aio_write+0x18d/0x1b0 [xfs] [11159.499217] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 [11159.499222] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 [11159.499225] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 [11159.499230] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [11159.499234] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11159.499238] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than 120 seconds.
[11279.490197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.491665] xfsaild/dm-10 D ffff9720a8660fd0 0 1134 2 0x00000000 [11279.491671] Call Trace: [11279.491682] [<ffffffffb92a3a2e>] ? try_to_del_timer_sync+0x5e/0x90 [11279.491688] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.491744] [<ffffffffc060de36>] _xfs_log_force+0x1c6/0x2c0 [xfs] [11279.491750] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.491783] [<ffffffffc0619fec>] ? xfsaild+0x16c/0x6f0 [xfs] [11279.491817] [<ffffffffc060df5c>] xfs_log_force+0x2c/0x70 [xfs] [11279.491849] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs] [11279.491880] [<ffffffffc0619fec>] xfsaild+0x16c/0x6f0 [xfs] [11279.491913] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs] [11279.491919] [<ffffffffb92bb161>] kthread+0xd1/0xe0 [11279.491926] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 [11279.491932] [<ffffffffb9920677>] ret_from_fork_nospec_begin+0x21/0x21 [11279.491936] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 [11279.491976] INFO: task glusterclogfsyn:14934 blocked for more than 120 seconds. [11279.493466] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.494952] glusterclogfsyn D ffff97209832af70 0 14934 1 0x00000080 [11279.494957] Call Trace: [11279.494979] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod] [11279.494983] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.494987] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11279.494997] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod] [11279.495001] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11279.495005] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140 [11279.495010] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 [11279.495049] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs] [11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] [11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.495090] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [11279.495094] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [11279.495098] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.495102] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [11279.495105] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds. [11279.496606] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.498114] glusterposixfsy D ffff972495f84f10 0 14941 1 0x00000080 [11279.498118] Call Trace: [11279.498134] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod] [11279.498138] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.498142] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11279.498152] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod] [11279.498156] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11279.498160] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140 [11279.498165] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.498169] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 [11279.498202] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs] [11279.498231] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] [11279.498238] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.498242] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [11279.498246] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [11279.498250] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.498254] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [11279.498257] INFO: task glusteriotwr1:14950 blocked for more than 120 seconds.
[11279.499789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.501343] glusteriotwr1 D ffff97208b6daf70 0 14950 1 0x00000080 [11279.501348] Call Trace: [11279.501353] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.501390] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [11279.501396] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.501428] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [11279.501432] [<ffffffffb944ef3f>] generic_write_sync+0x4f/0x70 [11279.501461] [<ffffffffc05f0545>] xfs_file_aio_write+0x155/0x1b0 [xfs] [11279.501466] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 [11279.501471] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 [11279.501475] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 [11279.501479] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [11279.501483] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.501489] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [11279.501493] INFO: task glusteriotwr4:14953 blocked for more than 120 seconds. [11279.503047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.504630] glusteriotwr4 D ffff972499f2bf40 0 14953 1 0x00000080 [11279.504635] Call Trace: [11279.504640] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.504676] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [11279.504681] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.504710] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [11279.504714] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.504718] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [11279.504722] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [11279.504725] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.504730] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [12127.466494] perf: interrupt took too long (8263 > 8150), lowering kernel.perf_event_max_sample_rate to 24000
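[Editor's aside: the hung-task warnings above mean threads sat in uninterruptible (D-state) sleep for over 120 seconds, mostly inside XFS log forces and flushes. A quick sketch for spotting which tasks are currently in D state on a host, reading /proc directly; note that process names containing spaces will not parse cleanly with this simple field split:]

```shell
#!/bin/sh
# List tasks currently in uninterruptible sleep (state "D" in /proc/<pid>/stat).
# Persistent D-state gluster or XFS threads here corroborate the dmesg warnings.
for s in /proc/[0-9]*/stat; do
  # Field 1 is the PID, field 2 the (comm), field 3 the state letter.
  awk '$3 == "D" {print "pid", $1, "comm", $2}' "$s" 2>/dev/null || true
done
```

On a healthy host this usually prints nothing; for a stuck PID, `cat /proc/<pid>/stack` (as root) shows where in the kernel it is blocked.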
-------------------- I think this is the cause of the massive ovirt performance issues, irrespective of gluster volume. At the time this happened, I was also ssh'ed into the host and was running some rpm query commands. I had just run rpm -qa | grep glusterfs (to verify what version was actually installed), and that command took almost 2 minutes to return! Normally it takes less than 2 seconds. That is all pure local SSD IO, too....
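[Editor's aside: a multi-minute `rpm -qa` points at the local block layer rather than gluster. One crude way to confirm is to time a small synced write to local disk, independent of any gluster mount; a sketch, with an arbitrary temp-file path:]

```shell
#!/bin/sh
# Crude local-write latency probe: write 32 MB and fsync it to local disk.
# On a healthy SSD this completes in well under a second; multi-second times
# during the stall windows implicate the local disk/device-mapper stack.
f=$(mktemp)
time dd if=/dev/zero of="$f" bs=1M count=32 conv=fsync 2>/dev/null
rm -f "$f"
```

Running this in a loop during one of the 30-second pauses described below would show whether plain local IO stalls at the same moments as the gluster volumes.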
I'm no expert, but it's my understanding that any time software causes these kinds of issues, it's a serious bug in the software, even if it's mis-handled exceptions. Is this correct?
--Jim
On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir <jim@palousetech.com> wrote:
I think this is the profile information for one of the volumes that lives on the SSDs and is fully operational with no down/problem disks:
[root@ovirt2 yum.repos.d]# gluster volume profile data info Brick: ovirt2.nwfiber.com:/gluster/brick2/data ---------------------------------------------- Cumulative Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 983 2696 1059 No. of Writes: 0 1113 302
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 852 88608 53526 No. of Writes: 522 812340 76257
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 54351 241901 15024 No. of Writes: 21636 8656 8976
Block Size: 131072b+ No. of Reads: 524156 No. of Writes: 296071 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 4189 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1257 RELEASEDIR 0.00 46.19 us 12.00 us 187.00 us 69 FLUSH 0.00 147.00 us 78.00 us 367.00 us 86 REMOVEXATTR 0.00 223.46 us 24.00 us 1166.00 us 149 READDIR 0.00 565.34 us 76.00 us 3639.00 us 88 FTRUNCATE 0.00 263.28 us 20.00 us 28385.00 us 228 LK 0.00 98.84 us 2.00 us 880.00 us 1198 OPENDIR 0.00 91.59 us 26.00 us 10371.00 us 3853 STATFS 0.00 494.14 us 17.00 us 193439.00 us 1171 GETXATTR 0.00 299.42 us 35.00 us 9799.00 us 2044 READDIRP 0.00 1965.31 us 110.00 us 382258.00 us 321 XATTROP 0.01 113.40 us 24.00 us 61061.00 us 8134 STAT 0.01 755.38 us 57.00 us 607603.00 us 3196 DISCARD 0.05 2690.09 us 58.00 us 2704761.00 us 3206 OPEN 0.10 119978.25 us 97.00 us 9406684.00 us 154 SETATTR 0.18 101.73 us 28.00 us 700477.00 us 313379 FSTAT 0.23 1059.84 us 25.00 us 2716124.00 us 38255 LOOKUP 0.47 1024.11 us 54.00 us 6197164.00 us 81455 FXATTROP 1.72 2984.00 us 15.00 us 37098954.00 us 103020 FINODELK 5.92 44315.32 us 51.00 us 24731536.00 us 23957 FSYNC 13.27 2399.78 us 25.00 us 22089540.00 us 991005 READ 37.00 5980.43 us 52.00 us 22099889.00 us 1108976 WRITE 41.04 5452.75 us 13.00 us 22102452.00 us 1349053 INODELK
Duration: 10026 seconds Data Read: 80046027759 bytes Data Written: 44496632320 bytes
Interval 1 Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 983 2696 1059 No. of Writes: 0 838 185
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 852 85856 51575 No. of Writes: 382 705802 57812
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 52673 232093 14984 No. of Writes: 13499 4908 4242
Block Size: 131072b+ No. of Reads: 460040 No. of Writes: 6411 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2093 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1093 RELEASEDIR 0.00 53.38 us 26.00 us 111.00 us 16 FLUSH 0.00 145.14 us 78.00 us 367.00 us 71 REMOVEXATTR 0.00 190.96 us 114.00 us 298.00 us 71 SETATTR 0.00 213.38 us 24.00 us 1145.00 us 90 READDIR 0.00 263.28 us 20.00 us 28385.00 us 228 LK 0.00 101.76 us 2.00 us 880.00 us 1093 OPENDIR 0.01 93.60 us 27.00 us 10371.00 us 3090 STATFS 0.02 537.47 us 17.00 us 193439.00 us 1038 GETXATTR 0.03 297.44 us 35.00 us 9799.00 us 1990 READDIRP 0.03 2357.28 us 110.00 us 382258.00 us 253 XATTROP 0.04 385.93 us 58.00 us 47593.00 us 2091 OPEN 0.04 114.86 us 24.00 us 61061.00 us 7715 STAT 0.06 444.59 us 57.00 us 333240.00 us 3053 DISCARD 0.42 316.24 us 25.00 us 290728.00 us 29823 LOOKUP 0.73 257.92 us 54.00 us 344812.00 us 63296 FXATTROP 1.37 98.30 us 28.00 us 67621.00 us 313172 FSTAT 1.58 2124.69 us 51.00 us 849200.00 us 16717 FSYNC 5.73 162.46 us 52.00 us 748492.00 us 794079 WRITE 7.19 2065.17 us 16.00 us 37098954.00 us 78381 FINODELK 36.44 886.32 us 25.00 us 2216436.00 us 925421 READ 46.30 1178.04 us 13.00 us 1700704.00 us 884635 INODELK
Duration: 7485 seconds Data Read: 71250527215 bytes Data Written: 5119903744 bytes
Brick: ovirt3.nwfiber.com:/gluster/brick2/data ---------------------------------------------- Cumulative Stats: Block Size: 1b+ No. of Reads: 0 No. of Writes: 3264419 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 90 FORGET 0.00 0.00 us 0.00 us 0.00 us 9462 RELEASE 0.00 0.00 us 0.00 us 0.00 us 4254 RELEASEDIR 0.00 50.52 us 13.00 us 190.00 us 71 FLUSH 0.00 186.97 us 87.00 us 713.00 us 86 REMOVEXATTR 0.00 79.32 us 33.00 us 189.00 us 228 LK 0.00 220.98 us 129.00 us 513.00 us 86 SETATTR 0.01 259.30 us 26.00 us 2632.00 us 137 READDIR 0.02 322.76 us 145.00 us 2125.00 us 321 XATTROP 0.03 109.55 us 2.00 us 1258.00 us 1193 OPENDIR 0.05 70.21 us 21.00 us 431.00 us 3196 DISCARD 0.05 169.26 us 21.00 us 2315.00 us 1545 GETXATTR 0.12 176.85 us 63.00 us 2844.00 us 3206 OPEN 0.61 303.49 us 90.00 us 3085.00 us 9633 FSTAT 2.44 305.66 us 28.00 us 3716.00 us 38230 LOOKUP 4.52 266.22 us 55.00 us 53424.00 us 81455 FXATTROP 6.96 1397.99 us 51.00 us 64822.00 us 23889 FSYNC 16.48 84.74 us 25.00 us 6917.00 us 932592 WRITE 30.16 106.90 us 13.00 us 3920189.00 us 1353046 INODELK 38.55 1794.52 us 14.00 us 16210553.00 us 103039 FINODELK
Duration: 66562 seconds Data Read: 0 bytes Data Written: 3264419 bytes
Interval 1 Stats: Block Size: 1b+ No. of Reads: 0 No. of Writes: 794080 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2093 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1093 RELEASEDIR 0.00 70.31 us 26.00 us 125.00 us 16 FLUSH 0.00 193.10 us 103.00 us 713.00 us 71 REMOVEXATTR 0.01 227.32 us 133.00 us 513.00 us 71 SETATTR 0.01 79.32 us 33.00 us 189.00 us 228 LK 0.01 259.83 us 35.00 us 1138.00 us 89 READDIR 0.03 318.26 us 145.00 us 2047.00 us 253 XATTROP 0.04 112.67 us 3.00 us 1258.00 us 1093 OPENDIR 0.06 167.98 us 23.00 us 1951.00 us 1014 GETXATTR 0.08 70.97 us 22.00 us 431.00 us 3053 DISCARD 0.13 183.78 us 66.00 us 2844.00 us 2091 OPEN 1.01 303.82 us 90.00 us 3085.00 us 9610 FSTAT 3.27 316.59 us 30.00 us 3716.00 us 29820 LOOKUP 5.83 265.79 us 59.00 us 53424.00 us 63296 FXATTROP 7.95 1373.89 us 51.00 us 64822.00 us 16717 FSYNC 23.17 851.99 us 14.00 us 16210553.00 us 78555 FINODELK 24.04 87.44 us 27.00 us 6917.00 us 794081 WRITE 34.36 111.91 us 14.00 us 984871.00 us 886790 INODELK
Duration: 7485 seconds Data Read: 0 bytes Data Written: 794080 bytes
----------------------- Here is the data from the volume that is backed by the SSHDs and has one failed disk: [root@ovirt2 yum.repos.d]# gluster volume profile data-hdd info Brick: 172.172.1.12:/gluster/brick3/data-hdd -------------------------------------------- Cumulative Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 1702 86 16 No. of Writes: 0 767 71
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 19 51841 2049 No. of Writes: 76 60668 35727
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 1744 639 1088 No. of Writes: 8524 2410 1285
Block Size: 131072b+ No. of Reads: 771999 No. of Writes: 29584 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2902 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1517 RELEASEDIR 0.00 197.00 us 197.00 us 197.00 us 1 FTRUNCATE 0.00 70.24 us 16.00 us 758.00 us 51 FLUSH 0.00 143.93 us 82.00 us 305.00 us 57 REMOVEXATTR 0.00 178.63 us 105.00 us 712.00 us 60 SETATTR 0.00 67.30 us 19.00 us 572.00 us 555 LK 0.00 322.80 us 23.00 us 4673.00 us 138 READDIR 0.00 336.56 us 106.00 us 11994.00 us 237 XATTROP 0.00 84.70 us 28.00 us 1071.00 us 3469 STATFS 0.01 387.75 us 2.00 us 146017.00 us 1467 OPENDIR 0.01 148.59 us 21.00 us 64374.00 us 4454 STAT 0.02 783.02 us 16.00 us 93502.00 us 1902 GETXATTR 0.03 1516.10 us 17.00 us 210690.00 us 1364 ENTRYLK 0.03 2555.47 us 300.00 us 674454.00 us 1064 READDIRP 0.07 85.74 us 19.00 us 68340.00 us 62849 FSTAT 0.07 1978.12 us 59.00 us 202596.00 us 2729 OPEN 0.22 708.57 us 15.00 us 394799.00 us 25447 LOOKUP 5.94 2331.74 us 15.00 us 1099530.00 us 207534 FINODELK 7.31 8311.75 us 58.00 us 1800216.00 us 71668 FXATTROP 12.49 7735.19 us 51.00 us 3595513.00 us 131642 WRITE 17.70 957.08 us 16.00 us 13700466.00 us 1508160 INODELK 24.55 2546.43 us 26.00 us 5077347.00 us 786060 READ 31.56 49699.15 us 47.00 us 3746331.00 us 51777 FSYNC
Duration: 10101 seconds Data Read: 101562897361 bytes Data Written: 4834450432 bytes
Interval 0 Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 1702 86 16 No. of Writes: 0 767 71
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 19 51841 2049 No. of Writes: 76 60668 35727
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 1744 639 1088 No. of Writes: 8524 2410 1285
Block Size: 131072b+ No. of Reads: 771999 No. of Writes: 29584 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2902 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1517 RELEASEDIR 0.00 197.00 us 197.00 us 197.00 us 1 FTRUNCATE 0.00 70.24 us 16.00 us 758.00 us 51 FLUSH 0.00 143.93 us 82.00 us 305.00 us 57 REMOVEXATTR 0.00 178.63 us 105.00 us 712.00 us 60 SETATTR 0.00 67.30 us 19.00 us 572.00 us 555 LK 0.00 322.80 us 23.00 us 4673.00 us 138 READDIR 0.00 336.56 us 106.00 us 11994.00 us 237 XATTROP 0.00 84.70 us 28.00 us 1071.00 us 3469 STATFS 0.01 387.75 us 2.00 us 146017.00 us 1467 OPENDIR 0.01 148.59 us 21.00 us 64374.00 us 4454 STAT 0.02 783.02 us 16.00 us 93502.00 us 1902 GETXATTR 0.03 1516.10 us 17.00 us 210690.00 us 1364 ENTRYLK 0.03 2555.47 us 300.00 us 674454.00 us 1064 READDIRP 0.07 85.73 us 19.00 us 68340.00 us 62849 FSTAT 0.07 1978.12 us 59.00 us 202596.00 us 2729 OPEN 0.22 708.57 us 15.00 us 394799.00 us 25447 LOOKUP 5.94 2334.57 us 15.00 us 1099530.00 us 207534 FINODELK 7.31 8311.49 us 58.00 us 1800216.00 us 71668 FXATTROP 12.49 7735.32 us 51.00 us 3595513.00 us 131642 WRITE 17.71 957.08 us 16.00 us 13700466.00 us 1508160 INODELK 24.56 2546.42 us 26.00 us 5077347.00 us 786060 READ 31.54 49651.63 us 47.00 us 3746331.00 us 51777 FSYNC
Duration: 10101 seconds Data Read: 101562897361 bytes Data Written: 4834450432 bytes
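[Editor's aside: when reading profile dumps like the ones above, the quickest signal is which fops dominate %-latency (here FSYNC and INODELK on data-hdd, versus a much flatter spread on the healthy SSD volume). A small sketch that pulls a sorted fop ranking out of saved `gluster volume profile <vol> info` output; the sample is inlined and abbreviated:]

```shell
#!/bin/sh
# Rank fops by %-latency from saved profile output. Sample data is inlined;
# replace with real `gluster volume profile <vol> info` output.
cat > /tmp/profile.txt <<'EOF'
 %-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls  Fop
 31.56     49699.15 us     47.00 us  3746331.00 us      51777  FSYNC
 24.55      2546.43 us     26.00 us  5077347.00 us     786060  READ
 17.70       957.08 us     16.00 us 13700466.00 us    1508160  INODELK
EOF
# Keep rows whose last field is an all-caps fop name and whose first field is
# a positive percentage, then sort highest-latency-share first.
awk '$NF ~ /^[A-Z]+$/ && $1+0 > 0 {print $1, $NF}' /tmp/profile.txt | sort -rn
```

A volume where FSYNC carries ~30% of latency at ~50 ms average, as in the data-hdd dump, is spending most of its time waiting on durable writes, which fits the XFS flush stalls reported later in the thread.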
On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir <jim@palousetech.com> wrote:
Thank you for your response.
I have 4 gluster volumes. 3 are replica 2 + arbitrator. replica bricks are on ovirt1 and ovirt2, arbitrator on ovirt3. The 4th volume is replica 3, with a brick on all three ovirt machines.
The first 3 volumes are on an SSD disk; the 4th is on a Seagate SSHD (same in all three machines). On ovirt3, the SSHD has reported hard IO failures, and that brick is offline. However, the other two replicas are fully operational (although they still show contents in the heal info command that won't go away, but that may be the case until I replace the failed disk).
What is bothering me is that ALL 4 gluster volumes are showing horrible performance issues. At this point, as the bad disk has been completely offlined, I would expect gluster to perform at normal speed, but that is definitely not the case.
I've also noticed that the performance hits seem to come in waves: things seem to work acceptably (but slow) for a while, then suddenly, its as if all disk IO on all volumes (including non-gluster local OS disk volumes for the hosts) pause for about 30 seconds, then IO resumes again. During those times, I start getting VM not responding and host not responding notices as well as the applications having major issues.
I've shut down most of my VMs and am down to just my essential core VMs (shedding about 75% of my VMs). I am still experiencing the same issues.
Am I correct in believing that once the failed disk was brought offline that performance should return to normal?
On Tue, May 29, 2018 at 1:27 PM, Alex K <rightkicktech@gmail.com> wrote:
I would check the disks' status and the accessibility of the mount points where your gluster volumes reside.
On Tue, May 29, 2018, 22:28 Jim Kusznir <jim@palousetech.com> wrote:
On one ovirt server, I'm now seeing these messages:

[56474.239725] blk_update_request: 63 callbacks suppressed
[56474.239732] blk_update_request: I/O error, dev dm-2, sector 0
[56474.240602] blk_update_request: I/O error, dev dm-2, sector 3905945472
[56474.241346] blk_update_request: I/O error, dev dm-2, sector 3905945584
[56474.242236] blk_update_request: I/O error, dev dm-2, sector 2048
[56474.243072] blk_update_request: I/O error, dev dm-2, sector 3905943424
[56474.243997] blk_update_request: I/O error, dev dm-2, sector 3905943536
[56474.247347] blk_update_request: I/O error, dev dm-2, sector 0
[56474.248315] blk_update_request: I/O error, dev dm-2, sector 3905945472
[56474.249231] blk_update_request: I/O error, dev dm-2, sector 3905945584
[56474.250221] blk_update_request: I/O error, dev dm-2, sector 2048
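Messages like these can be tallied to see how many devices are throwing errors and how often. A small sketch (the `io_error_summary` helper is illustrative; on the affected host you would feed it `dmesg` output):

```shell
# Count kernel I/O errors per block device from dmesg-style input.
io_error_summary() {
  awk '/I\/O error, dev/ {
         for (i = 1; i <= NF; i++)
           if ($i == "dev") { d = $(i + 1); sub(/,$/, "", d); n[d]++ }
       }
       END { for (d in n) printf "%s\t%d\n", d, n[d] }'
}

# Demo on two of the messages quoted above:
printf '%s\n' \
  '[56474.239732] blk_update_request: I/O error, dev dm-2, sector 0' \
  '[56474.240602] blk_update_request: I/O error, dev dm-2, sector 3905945472' |
  io_error_summary
```

On the host itself: `dmesg | io_error_summary`; the dm-N name can then be mapped back to a logical volume with `lsblk` or `dmsetup ls`.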
On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir <jim@palousetech.com> wrote:
> I see in messages on ovirt3 (my 3rd machine, the one upgraded to 4.2):
>
> May 29 11:54:41 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
> May 29 11:54:51 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
> May 29 11:55:01 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
>
> (appears a lot).
>
> I also found, on the ssh session of that host, some sysv warnings about the backing disk for one of the gluster volumes (the straight replica 3 one). The glusterfs process for that disk on that machine went offline. It's my understanding that it should continue to work with the other two machines while I attempt to replace that disk, right? Attempted writes (touching an empty file) can take 15 seconds; repeating it later will be much faster.
>
> Gluster generates a bunch of different log files; I don't know which ones you want, or from which machine(s).
>
> How do I do "volume profiling"?
>
> Thanks!
>
> On Tue, May 29, 2018 at 11:53 AM, Sahina Bose <sabose@redhat.com> wrote:
>
>> Do you see errors reported in the mount logs for the volume? If so, could you attach the logs? Any issues with your underlying disks? Can you also attach the output of volume profiling?
>>
>> On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir <jim@palousetech.com> wrote:
>>
>>> Ok, things have gotten MUCH worse this morning. I'm getting random errors from VMs; right now about a third of my VMs have been paused due to storage issues, and most of the remaining VMs are not performing well.
>>>
>>> At this point, I am in full EMERGENCY mode, as my production services are now impacted, and I'm getting calls coming in with problems...
>>> I'd greatly appreciate help... VMs are running VERY slowly (when they run), and they are steadily getting worse. I don't know why. I was seeing CPU peaks (to 100%) on several VMs, in perfect sync, for a few minutes at a time (while the VM became unresponsive, and any Linux VMs I was logged into were giving me the CPU-stuck messages in my original post). Is all this storage related?
>>>
>>> I also have two different gluster volumes for VM storage, and only one had the issues, but now VMs in both are being affected at the same time and in the same way.
>>>
>>> --Jim
>>>
>>> On Mon, May 28, 2018 at 10:50 PM, Sahina Bose <sabose@redhat.com> wrote:
>>>
>>>> [Adding gluster-users to look at the heal issue]
>>>>
>>>> On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir <jim@palousetech.com> wrote:
>>>>
>>>>> Hello:
>>>>>
>>>>> I've been having some cluster and gluster performance issues lately. I also found that my cluster was out of date and was trying to apply updates (hoping to fix some of these), and discovered the ovirt 4.1 repos were taken completely offline. So, I was forced to begin an upgrade to 4.2. According to the docs I found/read, I needed only to add the new repo, do a yum update, reboot, and be good on my hosts (I did the yum update, and engine-setup on my hosted engine). Things seemed to work relatively well, except for a gluster sync issue that showed up.
>>>>>
>>>>> My cluster is a 3 node hyperconverged cluster. I upgraded the hosted engine first, then engine 3. When engine 3 came back up, for some reason one of my gluster volumes would not sync.
>>>>> Here's sample output:
>>>>>
>>>>> [the full `gluster volume heal data-hdd info` listing is snipped here; it is quoted verbatim at the top of this thread]
>>>>>
>>>>> ---------
>>>>> It's been in this state for a couple days now, and bandwidth monitoring shows no appreciable data moving. I've tried repeatedly commanding a full heal from all three nodes in the cluster. It's always the same files that need healing.
>>>>> When running gluster volume heal data-hdd statistics, I sometimes see different information, but always some number of "heal failed" entries. It shows 0 for split brain.
>>>>>
>>>>> I'm not quite sure what to do. I suspect it may be due to nodes 1 and 2 still being on the older ovirt/gluster release, but I'm afraid to upgrade and reboot them until I have a good gluster sync (I don't need to create a split-brain issue). How do I proceed with this?
>>>>>
>>>>> Second issue: I've been experiencing VERY POOR performance on most of my VMs. To the tune that logging into a Windows 10 VM via remote desktop can take 5 minutes, and launching QuickBooks inside said VM can easily take 10 minutes. On some Linux VMs, I get random messages like this:
>>>>>
>>>>> Message from syslogd@unifi at May 28 20:39:23 ...
>>>>> kernel:[6171996.308904] NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [mongod:14766]
>>>>>
>>>>> (the process and PID are often different)
>>>>>
>>>>> I'm not quite sure what to do about this either. My initial thought was to upgrade everything to current and see if it's still there, but I cannot move forward with that until my gluster is healed...
>>>>>
>>>>> Thanks!
>>>>> --Jim
>>>>>
>>>>> _______________________________________________
>>>>> Users mailing list -- users@ovirt.org
>>>>> To unsubscribe send an email to users-leave@ovirt.org
>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>>>>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3LEV6ZQ3JV2XLAL7NYBTXOYMYUOTIRQF/

_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/ACO7RFSLBSRBAIONIC2HQ6Z24ZDES5MF/
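On the "how do I do volume profiling" question in the quoted thread: profiling is started and read with the standard gluster CLI, and the resulting per-fop table can be reduced to its worst offenders with a little awk. A sketch (the `top_fops` helper is illustrative; the gluster commands assume a standard gluster 3.x/4.x CLI):

```shell
# Start profiling, let the workload run, then dump and stop:
#   gluster volume profile data-hdd start
#   sleep 300   # capture a representative window
#   gluster volume profile data-hdd info > profile.txt
#   gluster volume profile data-hdd stop

# Extract the N fops with the highest %-latency from a saved `info` dump.
top_fops() {
  awk '$1 ~ /^[0-9]+\.[0-9]+$/ && $NF ~ /^[A-Z]+$/ { print $1, $NF }' |
    sort -rn | head -n "${1:-3}"
}

# Demo on two rows shaped like the profile tables in this thread:
printf '%s\n' \
  '31.54 49651.63 us 47.00 us 3746331.00 us 51777 FSYNC' \
  '24.56 2546.42 us 26.00 us 5077347.00 us 786060 READ' |
  top_fops 1
```

Run against the full dumps later in this thread, this surfaces FSYNC/INODELK as the dominant latency contributors at a glance.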

On Wed, May 30, 2018 at 10:42 AM, Jim Kusznir <jim@palousetech.com> wrote:
hosted-engine --deploy failed (it would not come up on my existing gluster storage). However, I realized no changes were written to my existing storage, so I went back to trying to get my old engine running.
hosted-engine --vm-status is now taking a very long time (5+ minutes) to return, and it returns stale information everywhere. I thought perhaps the lockspace was corrupt, so I tried to clean it and the metadata, but both are failing (--clean-metadata has hung and I can't even ctrl-c out of it).
How can I safely reinitialize all the lockspace/metadata? There is no engine or VM running currently...
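For the record, oVirt's hosted-engine CLI does have a lockspace reset path. A hedged sketch (verify the exact options against `hosted-engine --help` on your 4.2 install before running anything; the guard below simply skips on machines without the CLI):

```shell
# Stop the HA daemons on EVERY host first, then reinitialize the sanlock
# lockspace from one host and restart the daemons.
if command -v hosted-engine >/dev/null 2>&1; then
  systemctl stop ovirt-ha-agent ovirt-ha-broker
  hosted-engine --reinitialize-lockspace --force
  systemctl start ovirt-ha-broker ovirt-ha-agent
  RC=$?
else
  echo "hosted-engine CLI not present on this machine"
  RC=0
fi
```

This only makes sense once the underlying gluster storage is confirmed healthy; resetting the lockspace on broken storage will just hang again.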
I think the first thing to make sure of is that your storage is up and running. Can you mount the gluster volumes and access the contents there? Please provide the `gluster volume info` and `gluster volume status` output for the volumes that you're using.
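A minimal way to act on that advice (the volume name and server address are taken from this thread; the mount only succeeds on a machine with the glusterfs client and network access to the bricks):

```shell
# Mount the volume read-write at a scratch point and prove basic I/O works.
mnt=$(mktemp -d)
if command -v mount.glusterfs >/dev/null 2>&1 &&
   mount -t glusterfs 172.172.1.11:/data-hdd "$mnt" 2>/dev/null; then
  ls "$mnt" >/dev/null &&
    touch "$mnt/.rw-test" && rm -f "$mnt/.rw-test" &&
    echo "volume mounts and accepts writes"
  umount "$mnt"
else
  echo "glusterfs client unavailable here; run this on a cluster node"
fi
rmdir "$mnt"
RC=0
```

If the touch hangs or errors here, the problem is in the gluster/storage layer itself, independent of oVirt and the hosted engine.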
--Jim
On Tue, May 29, 2018 at 9:33 PM, Jim Kusznir <jim@palousetech.com> wrote:
Well, things went from bad to very, very bad....
It appears that during one of the 2-minute lockups, the fencing agents decided that another node in the cluster was down. As a result, 2 of the 3 nodes were simultaneously reset by the fencing agents. After the nodes came back up, the engine would not start, and all running VMs (including VMs on the 3rd node, which was not rebooted) crashed.
I've now been working for about 3 hours trying to get the engine to come up. I don't know why it won't start; hosted-engine --vm-start says it's starting, but it doesn't start (virsh doesn't show any VMs running). I'm currently running --deploy, as I had run out of other options. I hope this will let me re-import all my existing VMs and start them back up after everything comes back up.
I do have an unverified geo-rep backup; I don't know if it is a good backup (there were several prior messages to this list, but I didn't get replies to my questions; it was behaving in what I believe to be a strange way, and the data directories are larger than their source).
I'll see if my --deploy works, and if not, I'll be back with another message/help request.
When the dust settles and I'm at least minimally functional again, I really want to understand why all these technologies designed to offer redundancy conspired to reduce uptime and create failures where there weren't any otherwise. I thought that with hosted engine, 3 ovirt servers, and glusterfs at minimum replica 2 + arbiter or replica 3, I should have had strong resilience against server or disk failure, and should have been able to prevent or recover from data corruption. Instead, all of the above happened (once I get my cluster back up, I still have to try to recover my webserver VM, which won't boot due to XFS journal corruption created during the gluster crashes). I think a lot of these issues were rooted in the upgrade from 4.1 to 4.2.
--Jim
On Tue, May 29, 2018 at 6:25 PM, Jim Kusznir <jim@palousetech.com> wrote:
I also finally found the following in my system log on one server:
[10679.524491] INFO: task glusterclogro:14933 blocked for more than 120 seconds.
[10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[10679.527144] glusterclogro   D ffff97209832bf40     0 14933      1 0x00000080
[10679.527150] Call Trace:
[10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70
[10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[10679.527268] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[10679.527271] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20
[10679.527275] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[10679.527279] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds.
[10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[10679.529956] glusterposixfsy D ffff972495f84f10     0 14941      1 0x00000080
[10679.529961] Call Trace:
[10679.529966] [<ffffffffb9913f79>] schedule+0x29/0x70
[10679.530003] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[10679.530008] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[10679.530038] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[10679.530042] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[10679.530046] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[10679.530050] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20
[10679.530054] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[10679.530058] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120 seconds.
[10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[10679.533732] glusteriotwr13  D ffff9720a83f0000     0 15486      1 0x00000080
[10679.533738] Call Trace:
[10679.533747] [<ffffffffb9913f79>] schedule+0x29/0x70
[10679.533799] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[10679.533806] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[10679.533846] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[10679.533852] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[10679.533858] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[10679.533863] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20
[10679.533868] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[10679.533873] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[10919.512757] INFO: task glusterclogro:14933 blocked for more than 120 seconds.
[10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[10919.516663] glusterclogro   D ffff97209832bf40     0 14933      1 0x00000080
[10919.516677] Call Trace:
[10919.516690] [<ffffffffb9913f79>] schedule+0x29/0x70
[10919.516696] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
[10919.516703] [<ffffffffb951cc04>] ? blk_finish_plug+0x14/0x40
[10919.516768] [<ffffffffc05e9224>] ? _xfs_buf_ioapply+0x334/0x460 [xfs]
[10919.516774] [<ffffffffb991432d>] wait_for_completion+0xfd/0x140
[10919.516782] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[10919.516821] [<ffffffffc05eb0a3>] ? _xfs_buf_read+0x23/0x40 [xfs]
[10919.516859] [<ffffffffc05eafa9>] xfs_buf_submit_wait+0xf9/0x1d0 [xfs]
[10919.516902] [<ffffffffc061b279>] ? xfs_trans_read_buf_map+0x199/0x400 [xfs]
[10919.516940] [<ffffffffc05eb0a3>] _xfs_buf_read+0x23/0x40 [xfs]
[10919.516977] [<ffffffffc05eb1b9>] xfs_buf_read_map+0xf9/0x160 [xfs]
[10919.517022] [<ffffffffc061b279>] xfs_trans_read_buf_map+0x199/0x400 [xfs]
[10919.517057] [<ffffffffc05c8d04>] xfs_da_read_buf+0xd4/0x100 [xfs]
[10919.517091] [<ffffffffc05c8d53>] xfs_da3_node_read+0x23/0xd0 [xfs]
[10919.517126] [<ffffffffc05c9fee>] xfs_da3_node_lookup_int+0x6e/0x2f0 [xfs]
[10919.517160] [<ffffffffc05d5a1d>] xfs_dir2_node_lookup+0x4d/0x170 [xfs]
[10919.517194] [<ffffffffc05ccf5d>] xfs_dir_lookup+0x1bd/0x1e0 [xfs]
[10919.517233] [<ffffffffc05fd8d9>] xfs_lookup+0x69/0x140 [xfs]
[10919.517271] [<ffffffffc05fa018>] xfs_vn_lookup+0x78/0xc0 [xfs]
[10919.517278] [<ffffffffb9425cf3>] lookup_real+0x23/0x60
[10919.517283] [<ffffffffb9426702>] __lookup_hash+0x42/0x60
[10919.517288] [<ffffffffb942d519>] SYSC_renameat2+0x3a9/0x5a0
[10919.517296] [<ffffffffb94d3753>] ? selinux_file_free_security+0x23/0x30
[10919.517304] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[10919.517309] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[10919.517313] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[10919.517318] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[10919.517323] [<ffffffffb942e58e>] SyS_renameat2+0xe/0x10
[10919.517328] [<ffffffffb942e5ce>] SyS_rename+0x1e/0x20
[10919.517333] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[10919.517339] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11159.496095] INFO: task glusteriotwr9:15482 blocked for more than 120 seconds.
[11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11159.498978] glusteriotwr9   D ffff971fa0fa1fa0     0 15482      1 0x00000080
[11159.498984] Call Trace:
[11159.498995] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50
[11159.498999] [<ffffffffb9913f79>] schedule+0x29/0x70
[11159.499003] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
[11159.499056] [<ffffffffc05dd9b7>] ? xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs]
[11159.499082] [<ffffffffc05dd43e>] ? xfs_iext_bno_to_irec+0x8e/0xd0 [xfs]
[11159.499090] [<ffffffffb92f7a12>] ? ktime_get_ts64+0x52/0xf0
[11159.499093] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50
[11159.499097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130
[11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20
[11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50
[11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0
[11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90
[11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40
[11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70
[11159.499125] [<ffffffffb9394e85>] grab_cache_page_write_begin+0x55/0xc0
[11159.499130] [<ffffffffb9484b76>] iomap_write_begin+0x66/0x100
[11159.499135] [<ffffffffb9484edf>] iomap_write_actor+0xcf/0x1d0
[11159.499140] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80
[11159.499144] [<ffffffffb94854e7>] iomap_apply+0xb7/0x150
[11159.499149] [<ffffffffb9485621>] iomap_file_buffered_write+0xa1/0xe0
[11159.499153] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80
[11159.499182] [<ffffffffc05f025d>] xfs_file_buffered_aio_write+0x12d/0x2c0 [xfs]
[11159.499213] [<ffffffffc05f057d>] xfs_file_aio_write+0x18d/0x1b0 [xfs]
[11159.499217] [<ffffffffb941a533>] do_sync_write+0x93/0xe0
[11159.499222] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0
[11159.499225] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0
[11159.499230] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11159.499234] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11159.499238] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than 120 seconds.
[11279.490197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.491665] xfsaild/dm-10   D ffff9720a8660fd0     0  1134      2 0x00000000
[11279.491671] Call Trace:
[11279.491682] [<ffffffffb92a3a2e>] ? try_to_del_timer_sync+0x5e/0x90
[11279.491688] [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.491744] [<ffffffffc060de36>] _xfs_log_force+0x1c6/0x2c0 [xfs]
[11279.491750] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.491783] [<ffffffffc0619fec>] ? xfsaild+0x16c/0x6f0 [xfs]
[11279.491817] [<ffffffffc060df5c>] xfs_log_force+0x2c/0x70 [xfs]
[11279.491849] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[11279.491880] [<ffffffffc0619fec>] xfsaild+0x16c/0x6f0 [xfs]
[11279.491913] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[11279.491919] [<ffffffffb92bb161>] kthread+0xd1/0xe0
[11279.491926] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40
[11279.491932] [<ffffffffb9920677>] ret_from_fork_nospec_begin+0x21/0x21
[11279.491936] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40
[11279.491976] INFO: task glusterclogfsyn:14934 blocked for more than 120 seconds.
[11279.493466] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.494952] glusterclogfsyn D ffff97209832af70     0 14934      1 0x00000080
[11279.494957] Call Trace:
[11279.494979] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod]
[11279.494983] [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.494987] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
[11279.494997] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod]
[11279.495001] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130
[11279.495005] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140
[11279.495010] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110
[11279.495049] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs]
[11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs]
[11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[11279.495090] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11279.495094] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20
[11279.495098] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11279.495102] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11279.495105] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds.
[11279.496606] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.498114] glusterposixfsy D ffff972495f84f10     0 14941      1 0x00000080
[11279.498118] Call Trace:
[11279.498134] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod]
[11279.498138] [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.498142] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
[11279.498152] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod]
[11279.498156] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130
[11279.498160] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140
[11279.498165] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.498169] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110
[11279.498202] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs]
[11279.498231] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs]
[11279.498238] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[11279.498242] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11279.498246] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20
[11279.498250] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11279.498254] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11279.498257] INFO: task glusteriotwr1:14950 blocked for more than 120 seconds.
[11279.499789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.501343] glusteriotwr1   D ffff97208b6daf70     0 14950      1 0x00000080
[11279.501348] Call Trace:
[11279.501353] [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.501390] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[11279.501396] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.501428] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[11279.501432] [<ffffffffb944ef3f>] generic_write_sync+0x4f/0x70
[11279.501461] [<ffffffffc05f0545>] xfs_file_aio_write+0x155/0x1b0 [xfs]
[11279.501466] [<ffffffffb941a533>] do_sync_write+0x93/0xe0
[11279.501471] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0
[11279.501475] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0
[11279.501479] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11279.501483] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11279.501489] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11279.501493] INFO: task glusteriotwr4:14953 blocked for more than 120 seconds.
[11279.503047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.504630] glusteriotwr4   D ffff972499f2bf40     0 14953      1 0x00000080
[11279.504635] Call Trace:
[11279.504640] [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.504676] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[11279.504681] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.504710] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[11279.504714] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[11279.504718] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11279.504722] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20
[11279.504725] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11279.504730] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[12127.466494] perf: interrupt took too long (8263 > 8150), lowering kernel.perf_event_max_sample_rate to 24000
--------------------

I think this is the cause of the massive ovirt performance issues, irrespective of gluster volume. At the time this happened, I was also ssh'ed into the host, running some rpm query commands. I had just run rpm -qa | grep glusterfs (to verify what version was actually installed), and that command took almost 2 minutes to return! Normally it takes less than 2 seconds. And that is all purely local SSD I/O, too....
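Those symptoms (D-state hangs, slow purely-local commands) point at the block layer rather than gluster itself. One way to catch a stall in the act, sketched under the assumption that the sysstat package is installed for `iostat`:

```shell
# During a stall, run on the host (not inside a VM):
#   iostat -xmt 2 5    # watch await/%util per device; a saturated or dying
#                      # disk shows huge await with little throughput
# The hung-task warnings above fire after a task sits in D state for the
# kernel's hung_task_timeout_secs (120 s by default); read it back:
if [ -r /proc/sys/kernel/hung_task_timeout_secs ]; then
  cat /proc/sys/kernel/hung_task_timeout_secs
else
  echo "hung-task detector not configured on this kernel"
fi
RC=0
```

Correlating an `iostat` capture with the timestamps of the hung-task reports would show whether every stall maps to the same device (e.g. the dm device backing the failed SSHD brick).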
I'm no expert, but it's my understanding that any time software causes these kinds of issues, it's a serious bug in the software, even if it's mishandled exceptions. Is this correct?
--Jim
On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir <jim@palousetech.com> wrote:
I think this is the profile information for one of the volumes that lives on the SSDs and is fully operational with no down/problem disks:
[root@ovirt2 yum.repos.d]# gluster volume profile data info

Brick: ovirt2.nwfiber.com:/gluster/brick2/data
----------------------------------------------
Cumulative Stats:
   Block Size:        256b+      512b+     1024b+
 No. of Reads:          983       2696       1059
No. of Writes:            0       1113        302

   Block Size:       2048b+     4096b+     8192b+
 No. of Reads:          852      88608      53526
No. of Writes:          522     812340      76257

   Block Size:      16384b+    32768b+    65536b+
 No. of Reads:        54351     241901      15024
No. of Writes:        21636       8656       8976

   Block Size:     131072b+
 No. of Reads:       524156
No. of Writes:       296071

%-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls  Fop
---------  -----------  -----------  -----------  ------------  ----
0.00  0.00 us  0.00 us  0.00 us  4189  RELEASE
0.00  0.00 us  0.00 us  0.00 us  1257  RELEASEDIR
0.00  46.19 us  12.00 us  187.00 us  69  FLUSH
0.00  147.00 us  78.00 us  367.00 us  86  REMOVEXATTR
0.00  223.46 us  24.00 us  1166.00 us  149  READDIR
0.00  565.34 us  76.00 us  3639.00 us  88  FTRUNCATE
0.00  263.28 us  20.00 us  28385.00 us  228  LK
0.00  98.84 us  2.00 us  880.00 us  1198  OPENDIR
0.00  91.59 us  26.00 us  10371.00 us  3853  STATFS
0.00  494.14 us  17.00 us  193439.00 us  1171  GETXATTR
0.00  299.42 us  35.00 us  9799.00 us  2044  READDIRP
0.00  1965.31 us  110.00 us  382258.00 us  321  XATTROP
0.01  113.40 us  24.00 us  61061.00 us  8134  STAT
0.01  755.38 us  57.00 us  607603.00 us  3196  DISCARD
0.05  2690.09 us  58.00 us  2704761.00 us  3206  OPEN
0.10  119978.25 us  97.00 us  9406684.00 us  154  SETATTR
0.18  101.73 us  28.00 us  700477.00 us  313379  FSTAT
0.23  1059.84 us  25.00 us  2716124.00 us  38255  LOOKUP
0.47  1024.11 us  54.00 us  6197164.00 us  81455  FXATTROP
1.72  2984.00 us  15.00 us  37098954.00 us  103020  FINODELK
5.92  44315.32 us  51.00 us  24731536.00 us  23957  FSYNC
13.27  2399.78 us  25.00 us  22089540.00 us  991005  READ
37.00  5980.43 us  52.00 us  22099889.00 us  1108976  WRITE
41.04  5452.75 us  13.00 us  22102452.00 us  1349053  INODELK
Duration: 10026 seconds Data Read: 80046027759 bytes Data Written: 44496632320 bytes
Interval 1 Stats:
   Block Size:        256b+      512b+     1024b+
 No. of Reads:          983       2696       1059
No. of Writes:            0        838        185

   Block Size:       2048b+     4096b+     8192b+
 No. of Reads:          852      85856      51575
No. of Writes:          382     705802      57812

   Block Size:      16384b+    32768b+    65536b+
 No. of Reads:        52673     232093      14984
No. of Writes:        13499       4908       4242

   Block Size:     131072b+
 No. of Reads:       460040
No. of Writes:         6411

%-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls  Fop
---------  -----------  -----------  -----------  ------------  ----
0.00  0.00 us  0.00 us  0.00 us  2093  RELEASE
0.00  0.00 us  0.00 us  0.00 us  1093  RELEASEDIR
0.00  53.38 us  26.00 us  111.00 us  16  FLUSH
0.00  145.14 us  78.00 us  367.00 us  71  REMOVEXATTR
0.00  190.96 us  114.00 us  298.00 us  71  SETATTR
0.00  213.38 us  24.00 us  1145.00 us  90  READDIR
0.00  263.28 us  20.00 us  28385.00 us  228  LK
0.00  101.76 us  2.00 us  880.00 us  1093  OPENDIR
0.01  93.60 us  27.00 us  10371.00 us  3090  STATFS
0.02  537.47 us  17.00 us  193439.00 us  1038  GETXATTR
0.03  297.44 us  35.00 us  9799.00 us  1990  READDIRP
0.03  2357.28 us  110.00 us  382258.00 us  253  XATTROP
0.04  385.93 us  58.00 us  47593.00 us  2091  OPEN
0.04  114.86 us  24.00 us  61061.00 us  7715  STAT
0.06  444.59 us  57.00 us  333240.00 us  3053  DISCARD
0.42  316.24 us  25.00 us  290728.00 us  29823  LOOKUP
0.73  257.92 us  54.00 us  344812.00 us  63296  FXATTROP
1.37  98.30 us  28.00 us  67621.00 us  313172  FSTAT
1.58  2124.69 us  51.00 us  849200.00 us  16717  FSYNC
5.73  162.46 us  52.00 us  748492.00 us  794079  WRITE
7.19  2065.17 us  16.00 us  37098954.00 us  78381  FINODELK
36.44  886.32 us  25.00 us  2216436.00 us  925421  READ
46.30  1178.04 us  13.00 us  1700704.00 us  884635  INODELK
Duration: 7485 seconds Data Read: 71250527215 bytes Data Written: 5119903744 bytes
Brick: ovirt3.nwfiber.com:/gluster/brick2/data
----------------------------------------------
Cumulative Stats:
   Block Size:          1b+
 No. of Reads:            0
No. of Writes:      3264419

%-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls  Fop
---------  -----------  -----------  -----------  ------------  ----
0.00  0.00 us  0.00 us  0.00 us  90  FORGET
0.00  0.00 us  0.00 us  0.00 us  9462  RELEASE
0.00  0.00 us  0.00 us  0.00 us  4254  RELEASEDIR
0.00  50.52 us  13.00 us  190.00 us  71  FLUSH
0.00  186.97 us  87.00 us  713.00 us  86  REMOVEXATTR
0.00  79.32 us  33.00 us  189.00 us  228  LK
0.00  220.98 us  129.00 us  513.00 us  86  SETATTR
0.01  259.30 us  26.00 us  2632.00 us  137  READDIR
0.02  322.76 us  145.00 us  2125.00 us  321  XATTROP
0.03  109.55 us  2.00 us  1258.00 us  1193  OPENDIR
0.05  70.21 us  21.00 us  431.00 us  3196  DISCARD
0.05  169.26 us  21.00 us  2315.00 us  1545  GETXATTR
0.12  176.85 us  63.00 us  2844.00 us  3206  OPEN
0.61  303.49 us  90.00 us  3085.00 us  9633  FSTAT
2.44  305.66 us  28.00 us  3716.00 us  38230  LOOKUP
4.52  266.22 us  55.00 us  53424.00 us  81455  FXATTROP
6.96  1397.99 us  51.00 us  64822.00 us  23889  FSYNC
16.48  84.74 us  25.00 us  6917.00 us  932592  WRITE
30.16  106.90 us  13.00 us  3920189.00 us  1353046  INODELK
38.55  1794.52 us  14.00 us  16210553.00 us  103039  FINODELK
Duration: 66562 seconds Data Read: 0 bytes Data Written: 3264419 bytes
Interval 1 Stats:
Block Size:                   1b+
No. of Reads:                   0
No. of Writes:             794080
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls          Fop
 ---------   -----------   -----------   -----------   ------------         ----
      0.00       0.00 us       0.00 us       0.00 us           2093      RELEASE
      0.00       0.00 us       0.00 us       0.00 us           1093   RELEASEDIR
      0.00      70.31 us      26.00 us     125.00 us             16        FLUSH
      0.00     193.10 us     103.00 us     713.00 us             71  REMOVEXATTR
      0.01     227.32 us     133.00 us     513.00 us             71      SETATTR
      0.01      79.32 us      33.00 us     189.00 us            228           LK
      0.01     259.83 us      35.00 us    1138.00 us             89      READDIR
      0.03     318.26 us     145.00 us    2047.00 us            253      XATTROP
      0.04     112.67 us       3.00 us    1258.00 us           1093      OPENDIR
      0.06     167.98 us      23.00 us    1951.00 us           1014     GETXATTR
      0.08      70.97 us      22.00 us     431.00 us           3053      DISCARD
      0.13     183.78 us      66.00 us    2844.00 us           2091         OPEN
      1.01     303.82 us      90.00 us    3085.00 us           9610        FSTAT
      3.27     316.59 us      30.00 us    3716.00 us          29820       LOOKUP
      5.83     265.79 us      59.00 us   53424.00 us          63296     FXATTROP
      7.95    1373.89 us      51.00 us   64822.00 us          16717        FSYNC
     23.17     851.99 us      14.00 us 16210553.00 us          78555     FINODELK
     24.04      87.44 us      27.00 us    6917.00 us         794081        WRITE
     34.36     111.91 us      14.00 us  984871.00 us         886790      INODELK

    Duration: 7485 seconds
   Data Read: 0 bytes
Data Written: 794080 bytes
-----------------------

Here is the data from the volume that is backed by the SSHDs and has one failed disk:

[root@ovirt2 yum.repos.d]# gluster volume profile data-hdd info
Brick: 172.172.1.12:/gluster/brick3/data-hdd
--------------------------------------------
Cumulative Stats:
   Block Size:                256b+                 512b+                1024b+
 No. of Reads:                 1702                    86                    16
No. of Writes:                    0                   767                    71

   Block Size:               2048b+                4096b+                8192b+
 No. of Reads:                   19                 51841                  2049
No. of Writes:                   76                 60668                 35727

   Block Size:              16384b+               32768b+               65536b+
 No. of Reads:                 1744                   639                  1088
No. of Writes:                 8524                  2410                  1285

   Block Size:             131072b+
 No. of Reads:               771999
No. of Writes:                29584
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls          Fop
 ---------   -----------   -----------   -----------   ------------         ----
      0.00       0.00 us       0.00 us       0.00 us           2902      RELEASE
      0.00       0.00 us       0.00 us       0.00 us           1517   RELEASEDIR
      0.00     197.00 us     197.00 us     197.00 us              1    FTRUNCATE
      0.00      70.24 us      16.00 us     758.00 us             51        FLUSH
      0.00     143.93 us      82.00 us     305.00 us             57  REMOVEXATTR
      0.00     178.63 us     105.00 us     712.00 us             60      SETATTR
      0.00      67.30 us      19.00 us     572.00 us            555           LK
      0.00     322.80 us      23.00 us    4673.00 us            138      READDIR
      0.00     336.56 us     106.00 us   11994.00 us            237      XATTROP
      0.00      84.70 us      28.00 us    1071.00 us           3469       STATFS
      0.01     387.75 us       2.00 us  146017.00 us           1467      OPENDIR
      0.01     148.59 us      21.00 us   64374.00 us           4454         STAT
      0.02     783.02 us      16.00 us   93502.00 us           1902     GETXATTR
      0.03    1516.10 us      17.00 us  210690.00 us           1364      ENTRYLK
      0.03    2555.47 us     300.00 us  674454.00 us           1064     READDIRP
      0.07      85.74 us      19.00 us   68340.00 us          62849        FSTAT
      0.07    1978.12 us      59.00 us  202596.00 us           2729         OPEN
      0.22     708.57 us      15.00 us  394799.00 us          25447       LOOKUP
      5.94    2331.74 us      15.00 us  1099530.00 us         207534     FINODELK
      7.31    8311.75 us      58.00 us  1800216.00 us          71668     FXATTROP
     12.49    7735.19 us      51.00 us  3595513.00 us         131642        WRITE
     17.70     957.08 us      16.00 us 13700466.00 us        1508160      INODELK
     24.55    2546.43 us      26.00 us  5077347.00 us         786060         READ
     31.56   49699.15 us      47.00 us  3746331.00 us          51777        FSYNC

    Duration: 10101 seconds
   Data Read: 101562897361 bytes
Data Written: 4834450432 bytes
Interval 0 Stats:
   Block Size:                256b+                 512b+                1024b+
 No. of Reads:                 1702                    86                    16
No. of Writes:                    0                   767                    71

   Block Size:               2048b+                4096b+                8192b+
 No. of Reads:                   19                 51841                  2049
No. of Writes:                   76                 60668                 35727

   Block Size:              16384b+               32768b+               65536b+
 No. of Reads:                 1744                   639                  1088
No. of Writes:                 8524                  2410                  1285

   Block Size:             131072b+
 No. of Reads:               771999
No. of Writes:                29584
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls          Fop
 ---------   -----------   -----------   -----------   ------------         ----
      0.00       0.00 us       0.00 us       0.00 us           2902      RELEASE
      0.00       0.00 us       0.00 us       0.00 us           1517   RELEASEDIR
      0.00     197.00 us     197.00 us     197.00 us              1    FTRUNCATE
      0.00      70.24 us      16.00 us     758.00 us             51        FLUSH
      0.00     143.93 us      82.00 us     305.00 us             57  REMOVEXATTR
      0.00     178.63 us     105.00 us     712.00 us             60      SETATTR
      0.00      67.30 us      19.00 us     572.00 us            555           LK
      0.00     322.80 us      23.00 us    4673.00 us            138      READDIR
      0.00     336.56 us     106.00 us   11994.00 us            237      XATTROP
      0.00      84.70 us      28.00 us    1071.00 us           3469       STATFS
      0.01     387.75 us       2.00 us  146017.00 us           1467      OPENDIR
      0.01     148.59 us      21.00 us   64374.00 us           4454         STAT
      0.02     783.02 us      16.00 us   93502.00 us           1902     GETXATTR
      0.03    1516.10 us      17.00 us  210690.00 us           1364      ENTRYLK
      0.03    2555.47 us     300.00 us  674454.00 us           1064     READDIRP
      0.07      85.73 us      19.00 us   68340.00 us          62849        FSTAT
      0.07    1978.12 us      59.00 us  202596.00 us           2729         OPEN
      0.22     708.57 us      15.00 us  394799.00 us          25447       LOOKUP
      5.94    2334.57 us      15.00 us  1099530.00 us         207534     FINODELK
      7.31    8311.49 us      58.00 us  1800216.00 us          71668     FXATTROP
     12.49    7735.32 us      51.00 us  3595513.00 us         131642        WRITE
     17.71     957.08 us      16.00 us 13700466.00 us        1508160      INODELK
     24.56    2546.42 us      26.00 us  5077347.00 us         786060         READ
     31.54   49651.63 us      47.00 us  3746331.00 us          51777        FSYNC

    Duration: 10101 seconds
   Data Read: 101562897361 bytes
Data Written: 4834450432 bytes
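Reading these dumps by hand is tedious; one quick way to surface the dominant fops is to sort the table rows by the %-latency column. A minimal sketch (the inlined rows are copied from the profile output above; on a live host you would pipe `gluster volume profile data-hdd info` in instead of the here-document):

```shell
# Print the top-3 fops by %-latency from saved profile output. Data rows are
# recognized by a numeric first field, which skips headers and separator lines.
awk 'NF >= 6 && $1 ~ /^[0-9.]+$/ { print $1, $NF }' <<'EOF' | sort -rn | head -3
 46.30 1178.04 us 13.00 us 1700704.00 us 884635 INODELK
 36.44 886.32 us 25.00 us 2216436.00 us 925421 READ
  7.19 2065.17 us 16.00 us 37098954.00 us 78381 FINODELK
  5.73 162.46 us 52.00 us 748492.00 us 794079 WRITE
EOF
```

Here that immediately shows the lock fops (INODELK/FINODELK) and READ dominating total latency.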
On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir <jim@palousetech.com> wrote:
Thank you for your response.
I have 4 gluster volumes. Three are replica 2 + arbiter: the replica bricks are on ovirt1 and ovirt2, and the arbiter is on ovirt3. The 4th volume is replica 3, with a brick on all three ovirt machines.
The first 3 volumes are on an SSD disk; the 4th is on a Seagate SSHD (the same model in all three machines). On ovirt3, the SSHD has reported hard IO failures, and that brick is offline. However, the other two replicas are fully operational (although they still show entries in the heal info output that won't go away; that may be expected until I replace the failed disk).
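A quick way to watch whether those pending entries are actually draining is to total the per-brick counts over time. A small sketch (sample output trimmed from the heal info shown earlier in this thread; on a host you would pipe `gluster volume heal data-hdd info` in instead):

```shell
# Sum outstanding heal entries across all bricks; a total that stays flat
# across repeated runs, with no brick-to-brick traffic, suggests heal is stuck.
awk '/^Number of entries:/ { sum += $4 } END { print sum + 0 }' <<'EOF'
Brick 172.172.1.11:/gluster/brick3/data-hdd
Status: Connected
Number of entries: 8
Brick 172.172.1.12:/gluster/brick3/data-hdd
Status: Connected
Number of entries: 8
EOF
```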
What is bothering me is that ALL 4 gluster volumes are showing horrible performance issues. At this point, as the bad disk has been completely offlined, I would expect gluster to perform at normal speed, but that is definitely not the case.
I've also noticed that the performance hits seem to come in waves: things work acceptably (but slowly) for a while, then suddenly it's as if all disk IO on all volumes (including non-gluster local OS disk volumes on the hosts) pauses for about 30 seconds, then IO resumes. During those times, I start getting "VM not responding" and "host not responding" notices, and the applications have major issues.
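Pauses like that can be caught in the act with a simple write+fsync timer run in a loop against a file on each suspect volume (the `/tmp` path here is a placeholder; point it at the gluster mount you want to watch):

```shell
# Time a one-byte write plus fsync. Samples taken every few seconds should be
# single-digit milliseconds; values jumping toward 30 s line up with the stalls.
f=/tmp/fsync-probe.$$        # placeholder path -- use a file on the suspect mount
t0=$(date +%s%N)
dd if=/dev/zero of="$f" bs=1 count=1 conv=fsync 2>/dev/null
t1=$(date +%s%N)
rm -f "$f"
echo "fsync latency: $(( (t1 - t0) / 1000000 )) ms"
```

Running it on both a gluster mount and a local filesystem at the same time would also tell you whether the stall is gluster-specific or host-wide.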
I've shut down most of my VMs and am down to just my essential core VMs (I shed about 75% of my VMs). I'm still experiencing the same issues.
Am I correct in believing that once the failed disk was brought offline that performance should return to normal?
On Tue, May 29, 2018 at 1:27 PM, Alex K <rightkicktech@gmail.com> wrote:
I would check disks status and accessibility of mount points where your gluster volumes reside.
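The mount-point accessibility check can be scripted; a hung gluster mount makes plain `stat` block indefinitely, so bound it with a timeout (the glob below is the usual oVirt gluster mount location, adjust it to your setup):

```shell
# Probe each gluster mount point; anything that can't answer a stat within
# 5 seconds is flagged instead of hanging the whole check.
for m in /rhev/data-center/mnt/glusterSD/*; do
    if timeout 5 stat -t "$m" >/dev/null 2>&1; then
        echo "OK       $m"
    else
        echo "HUNG/ERR $m"
    fi
done
```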
On Tue, May 29, 2018, 22:28 Jim Kusznir <jim@palousetech.com> wrote:
> On one ovirt server, I'm now seeing these messages: > [56474.239725] blk_update_request: 63 callbacks suppressed > [56474.239732] blk_update_request: I/O error, dev dm-2, sector 0 > [56474.240602] blk_update_request: I/O error, dev dm-2, sector > 3905945472 > [56474.241346] blk_update_request: I/O error, dev dm-2, sector > 3905945584 > [56474.242236] blk_update_request: I/O error, dev dm-2, sector 2048 > [56474.243072] blk_update_request: I/O error, dev dm-2, sector > 3905943424 > [56474.243997] blk_update_request: I/O error, dev dm-2, sector > 3905943536 > [56474.247347] blk_update_request: I/O error, dev dm-2, sector 0 > [56474.248315] blk_update_request: I/O error, dev dm-2, sector > 3905945472 > [56474.249231] blk_update_request: I/O error, dev dm-2, sector > 3905945584 > [56474.250221] blk_update_request: I/O error, dev dm-2, sector 2048 > > > > > On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir <jim@palousetech.com> > wrote: > >> I see in messages on ovirt3 (my 3rd machine, the one upgraded to >> 4.2): >> >> May 29 11:54:41 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >> database connection failed (No such file or directory) >> May 29 11:54:51 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >> database connection failed (No such file or directory) >> May 29 11:55:01 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >> database connection failed (No such file or directory) >> (appears a lot). >> >> I also found on the ssh session of that, some sysv warnings about >> the backing disk for one of the gluster volumes (straight replica 3). The >> glusterfs process for that disk on that machine went offline. Its my >> understanding that it should continue to work with the other two machines >> while I attempt to replace that disk, right? Attempted writes (touching an >> empty file) can take 15 seconds, repeating it later will be much faster. 
>> >> Gluster generates a bunch of different log files, I don't know what >> ones you want, or from which machine(s). >> >> How do I do "volume profiling"? >> >> Thanks! >> >> On Tue, May 29, 2018 at 11:53 AM, Sahina Bose <sabose@redhat.com> >> wrote: >> >>> Do you see errors reported in the mount logs for the volume? If >>> so, could you attach the logs? >>> Any issues with your underlying disks. Can you also attach output >>> of volume profiling? >>> >>> On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir <jim@palousetech.com >>> > wrote: >>> >>>> Ok, things have gotten MUCH worse this morning. I'm getting >>>> random errors from VMs, right now, about a third of my VMs have been paused >>>> due to storage issues, and most of the remaining VMs are not performing >>>> well. >>>> >>>> At this point, I am in full EMERGENCY mode, as my production >>>> services are now impacted, and I'm getting calls coming in with problems... >>>> >>>> I'd greatly appreciate help...VMs are running VERY slowly (when >>>> they run), and they are steadily getting worse. I don't know why. I was >>>> seeing CPU peaks (to 100%) on several VMs, in perfect sync, for a few >>>> minutes at a time (while the VM became unresponsive and any VMs I was >>>> logged into that were linux were giving me the CPU stuck messages in my >>>> origional post). Is all this storage related? >>>> >>>> I also have two different gluster volumes for VM storage, and >>>> only one had the issues, but now VMs in both are being affected at the same >>>> time and same way. >>>> >>>> --Jim >>>> >>>> On Mon, May 28, 2018 at 10:50 PM, Sahina Bose <sabose@redhat.com> >>>> wrote: >>>> >>>>> [Adding gluster-users to look at the heal issue] >>>>> >>>>> On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir < >>>>> jim@palousetech.com> wrote: >>>>> >>>>>> Hello: >>>>>> >>>>>> I've been having some cluster and gluster performance issues >>>>>> lately. 
I also found that my cluster was out of date, and was trying to >>>>>> apply updates (hoping to fix some of these), and discovered the ovirt 4.1 >>>>>> repos were taken completely offline. So, I was forced to begin an upgrade >>>>>> to 4.2. According to docs I found/read, I needed only add the new repo, do >>>>>> a yum update, reboot, and be good on my hosts (did the yum update, the >>>>>> engine-setup on my hosted engine). Things seemed to work relatively well, >>>>>> except for a gluster sync issue that showed up. >>>>>> >>>>>> My cluster is a 3 node hyperconverged cluster. I upgraded the >>>>>> hosted engine first, then engine 3. When engine 3 came back up, for some >>>>>> reason one of my gluster volumes would not sync. Here's sample output: >>>>>> >>>>>> [root@ovirt3 ~]# gluster volume heal data-hdd info >>>>>> Brick 172.172.1.11:/gluster/brick3/data-hdd >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>>>>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>>>>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>>>>> 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>>>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>>>>> 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>>>>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>>>> Status: Connected >>>>>> Number of entries: 8 
>>>>>> >>>>>> Brick 172.172.1.12:/gluster/brick3/data-hdd >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>>>>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>>>>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>>>>> 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>>>>> 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>>>>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>>>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>>>> Status: Connected >>>>>> Number of entries: 8 >>>>>> >>>>>> Brick 172.172.1.13:/gluster/brick3/data-hdd >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>>>>> 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>>>>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>>>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>>>>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>>>>> 
725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>>>>> 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>>>> Status: Connected >>>>>> Number of entries: 8 >>>>>> >>>>>> --------- >>>>>> Its been in this state for a couple days now, and bandwidth >>>>>> monitoring shows no appreciable data moving. I've tried repeatedly >>>>>> commanding a full heal from all three clusters in the node. Its always the >>>>>> same files that need healing. >>>>>> >>>>>> When running gluster volume heal data-hdd statistics, I see >>>>>> sometimes different information, but always some number of "heal failed" >>>>>> entries. It shows 0 for split brain. >>>>>> >>>>>> I'm not quite sure what to do. I suspect it may be due to >>>>>> nodes 1 and 2 still being on the older ovirt/gluster release, but I'm >>>>>> afraid to upgrade and reboot them until I have a good gluster sync (don't >>>>>> need to create a split brain issue). How do I proceed with this? >>>>>> >>>>>> Second issue: I've been experiencing VERY POOR performance on >>>>>> most of my VMs. To the tune that logging into a windows 10 vm via remote >>>>>> desktop can take 5 minutes, launching quickbooks inside said vm can easily >>>>>> take 10 minutes. On some linux VMs, I get random messages like this: >>>>>> Message from syslogd@unifi at May 28 20:39:23 ... >>>>>> kernel:[6171996.308904] NMI watchdog: BUG: soft lockup - CPU#0 >>>>>> stuck for 22s! [mongod:14766] >>>>>> >>>>>> (the process and PID are often different) >>>>>> >>>>>> I'm not quite sure what to do about this either. My initial >>>>>> thought was upgrad everything to current and see if its still there, but I >>>>>> cannot move forward with that until my gluster is healed... >>>>>> >>>>>> Thanks! 
>>>>>> --Jim >>>>>> >>>>>> _______________________________________________ >>>>>> Users mailing list -- users@ovirt.org >>>>>> To unsubscribe send an email to users-leave@ovirt.org >>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ >>>>>> oVirt Code of Conduct: https://www.ovirt.org/communit >>>>>> y/about/community-guidelines/ >>>>>> List Archives: https://lists.ovirt.org/archiv >>>>>> es/list/users@ovirt.org/message/3LEV6ZQ3JV2XLAL7NYBTXOYMYUOT >>>>>> IRQF/ >>>>>> >>>>>> >>>>> >>>> >>> >> > _______________________________________________ > Users mailing list -- users@ovirt.org > To unsubscribe send an email to users-leave@ovirt.org > Privacy Statement: https://www.ovirt.org/site/privacy-policy/ > oVirt Code of Conduct: https://www.ovirt.org/communit > y/about/community-guidelines/ > List Archives: https://lists.ovirt.org/archiv > es/list/users@ovirt.org/message/ACO7RFSLBSRBAIONIC2HQ6Z24ZDES5MF/ >

I believe the gluster data store for the engine is up and working correctly. The rest are not mounted, as the engine hasn't managed to start correctly yet. I did perform the copy-a-junk-file test: copy a file into the data store, check that it's there on another host, then delete it and see that it disappeared on the first host; it passed. Here's the info and status. (I have NOT performed the steps that Krutika and Ravishankar suggested yet, as I don't have my data volumes working again yet.)

[root@ovirt2 images]# gluster volume info

Volume Name: data
Type: Replicate
Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick2/data
Brick2: ovirt2.nwfiber.com:/gluster/brick2/data
Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter)
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
server.allow-insecure: on
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on

Volume Name: data-hdd
Type: Replicate
Volume ID: d342a3ab-16f3-49f0-bbcf-f788be8ac5f1
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 172.172.1.11:/gluster/brick3/data-hdd
Brick2: 172.172.1.12:/gluster/brick3/data-hdd
Brick3: 172.172.1.13:/gluster/brick3/data-hdd
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
network.ping-timeout: 30
server.allow-insecure: on
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
performance.readdir-ahead: on

Volume Name: engine
Type: Replicate
Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on

Volume Name: iso
Type: Replicate
Volume ID: b1ba15f5-0f0f-4411-89d0-595179f02b92
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick4/iso
Brick2: ovirt2.nwfiber.com:/gluster/brick4/iso
Brick3: ovirt3.nwfiber.com:/gluster/brick4/iso (arbiter)
Options Reconfigured:
performance.strict-o-direct: on
nfs.disable: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 6
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on

[root@ovirt2 images]# gluster volume status
Status of volume: data
Gluster process                                 TCP Port  RDMA Port  Online  Pid
--------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick2/data   49152     0          Y       3226
Brick ovirt2.nwfiber.com:/gluster/brick2/data   49152     0          Y       2967
Brick ovirt3.nwfiber.com:/gluster/brick2/data   49152     0          Y       2554
Self-heal Daemon on localhost                   N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y       4771

Task Status of Volume data
--------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: data-hdd
Gluster process                                 TCP Port  RDMA Port  Online  Pid
--------------------------------------------------------------------------------
Brick 172.172.1.11:/gluster/brick3/data-hdd     49153     0          Y       3232
Brick 172.172.1.12:/gluster/brick3/data-hdd     49153     0          Y       2976
Brick 172.172.1.13:/gluster/brick3/data-hdd     N/A       N/A        N       N/A
NFS Server on localhost                         N/A       N/A        N       N/A
Self-heal Daemon on localhost                   N/A       N/A        Y       4818
NFS Server on ovirt3.nwfiber.com                N/A       N/A        N       N/A
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y       17853
NFS Server on ovirt1.nwfiber.com                N/A       N/A        N       N/A
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y       4771

Task Status of Volume data-hdd
--------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: engine
Gluster process                                 TCP Port  RDMA Port  Online  Pid
--------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick1/engine 49154     0          Y       3239
Brick ovirt2.nwfiber.com:/gluster/brick1/engine 49154     0          Y       2982
Brick ovirt3.nwfiber.com:/gluster/brick1/engine 49154     0          Y       2578
Self-heal Daemon on localhost                   N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y       4771

Task Status of Volume engine
--------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: iso
Gluster process                                 TCP Port  RDMA Port  Online  Pid
--------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick4/iso    49155     0          Y       3247
Brick ovirt2.nwfiber.com:/gluster/brick4/iso    49155     0          Y       2990
Brick ovirt3.nwfiber.com:/gluster/brick4/iso    49155     0          Y       2580
Self-heal Daemon on localhost                   N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y       4771

Task Status of Volume iso
--------------------------------------------------------------------------------
There are no active volume tasks

On Tue, May 29, 2018 at 11:30 PM, Sahina Bose <sabose@redhat.com> wrote:
On Wed, May 30, 2018 at 10:42 AM, Jim Kusznir <jim@palousetech.com> wrote:
hosted-engine --deploy failed (would not come up on my existing gluster storage). However, I realized no changes were written to my existing storage. So, I went back to trying to get my old engine running.
hosted-engine --vm-status is now taking a very long time (5+ minutes) to return, and it returns stale information everywhere. I thought perhaps the lockspace was corrupt, so I tried to clean that and the metadata, but both are failing (--clean-metadata has hung and I can't even ctrl-c out of it).
How can I reinitialize all the lockspace/metadata safely? There is no engine or VMs running currently....
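For reference, the sequence commonly suggested for reinitializing the hosted-engine sanlock lockspace looks like the following. This is a hedged sketch only (printed here rather than executed); verify each flag against your oVirt 4.2 documentation before running anything, and only do it with no engine or HA VMs running:

```shell
# Print (not run) the usual hosted-engine lockspace reinit sequence so it can
# be reviewed first; run the lines by hand once verified against the docs.
cat <<'EOF'
hosted-engine --set-maintenance --mode=global
systemctl stop ovirt-ha-agent ovirt-ha-broker
hosted-engine --reinitialize-lockspace --force
systemctl start ovirt-ha-broker ovirt-ha-agent
hosted-engine --set-maintenance --mode=none
EOF
```

Note this only resets sanlock state; it won't help if the underlying gluster storage itself is hanging, which is why checking the mounts first (as suggested below) matters.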
I think the first thing to make sure of is that your storage is up and running. Can you mount the gluster volumes and access the contents there? Please provide the gluster volume info and gluster volume status output for the volumes that you're using.
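Once you have the `gluster volume status` output, the bricks to worry about are the ones whose Online column reads N; that can be pulled out mechanically. A small sketch (sample lines inlined from the status output in this thread; pipe the live command in instead):

```shell
# List bricks reported offline (Online column == N) in `gluster volume status`
# output. Columns per brick line: name, TCP port, RDMA port, Online, Pid.
awk '$1 == "Brick" && $(NF-1) == "N" { print $2 }' <<'EOF'
Brick 172.172.1.11:/gluster/brick3/data-hdd 49153 0 Y 3232
Brick 172.172.1.13:/gluster/brick3/data-hdd N/A N/A N N/A
EOF
```

Against the status dump in this thread it flags exactly one brick: the data-hdd brick on 172.172.1.13.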
--Jim
On Tue, May 29, 2018 at 9:33 PM, Jim Kusznir <jim@palousetech.com> wrote:
Well, things went from bad to very, very bad....
It appears that during one of the 2-minute lockups, the fencing agents decided that another node in the cluster was down. As a result, 2 of the 3 nodes were simultaneously reset by the fencing agents. After the nodes came back up, the engine would not start. All running VMs (including VMs on the 3rd node, which was not rebooted) crashed.
I've now been working for about 3 hours trying to get the engine to come up. I don't know why it won't start. hosted-engine --vm-start says it's starting, but it doesn't start (virsh doesn't show any VMs running). I'm currently running --deploy, as I had run out of other options. I hope this will allow me to re-import all my existing VMs and start them back up after everything comes back up.
I do have an unverified geo-rep backup; I don't know if it is a good backup (there were several prior messages to this list, but I didn't get replies to my questions; its behavior seemed strange to me, and the data directories are larger than their source).
I'll see if my --deploy works, and if not, I'll be back with another message/help request.
When the dust settles and I'm at least minimally functional again, I really want to understand why all these technologies designed for redundancy conspired to reduce uptime and create failures where there weren't any otherwise. I thought that hosted engine, 3 oVirt servers, and glusterfs with a minimum of replica 2 + arbiter or replica 3 should have offered strong resilience against server or disk failure, and should have prevented or recovered from data corruption. Instead, all of the above happened (once I get my cluster back up, I still have to try to recover my webserver VM, which won't boot due to XFS journal corruption created during the gluster crashes). I think a lot of these issues were rooted in the upgrade from 4.1 to 4.2.
--Jim
On Tue, May 29, 2018 at 6:25 PM, Jim Kusznir <jim@palousetech.com> wrote:
I also finally found the following in my system log on one server:
[10679.524491] INFO: task glusterclogro:14933 blocked for more than 120 seconds. [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10679.527144] glusterclogro D ffff97209832bf40 0 14933 1 0x00000080 [10679.527150] Call Trace: [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.527268] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10679.527271] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [10679.527275] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.527279] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds. [10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10679.529956] glusterposixfsy D ffff972495f84f10 0 14941 1 0x00000080 [10679.529961] Call Trace: [10679.529966] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.530003] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.530008] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.530038] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.530042] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.530046] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10679.530050] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [10679.530054] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.530058] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120 seconds. [10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
[10679.533732] glusteriotwr13  D ffff9720a83f0000     0 15486      1 0x00000080
[10679.533738] Call Trace:
[10679.533747] [<ffffffffb9913f79>] schedule+0x29/0x70
[10679.533799] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[10679.533806] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[10679.533846] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[10679.533852] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[10679.533858] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[10679.533863] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20
[10679.533868] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[10679.533873] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[10919.512757] INFO: task glusterclogro:14933 blocked for more than 120 seconds.
[10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[10919.516663] glusterclogro   D ffff97209832bf40     0 14933      1 0x00000080
[10919.516677] Call Trace:
[10919.516690] [<ffffffffb9913f79>] schedule+0x29/0x70
[10919.516696] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
[10919.516703] [<ffffffffb951cc04>] ? blk_finish_plug+0x14/0x40
[10919.516768] [<ffffffffc05e9224>] ? _xfs_buf_ioapply+0x334/0x460 [xfs]
[10919.516774] [<ffffffffb991432d>] wait_for_completion+0xfd/0x140
[10919.516782] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[10919.516821] [<ffffffffc05eb0a3>] ? _xfs_buf_read+0x23/0x40 [xfs]
[10919.516859] [<ffffffffc05eafa9>] xfs_buf_submit_wait+0xf9/0x1d0 [xfs]
[10919.516902] [<ffffffffc061b279>] ? xfs_trans_read_buf_map+0x199/0x400 [xfs]
[10919.516940] [<ffffffffc05eb0a3>] _xfs_buf_read+0x23/0x40 [xfs]
[10919.516977] [<ffffffffc05eb1b9>] xfs_buf_read_map+0xf9/0x160 [xfs]
[10919.517022] [<ffffffffc061b279>] xfs_trans_read_buf_map+0x199/0x400 [xfs]
[10919.517057] [<ffffffffc05c8d04>] xfs_da_read_buf+0xd4/0x100 [xfs]
[10919.517091] [<ffffffffc05c8d53>] xfs_da3_node_read+0x23/0xd0 [xfs]
[10919.517126] [<ffffffffc05c9fee>] xfs_da3_node_lookup_int+0x6e/0x2f0 [xfs]
[10919.517160] [<ffffffffc05d5a1d>] xfs_dir2_node_lookup+0x4d/0x170 [xfs]
[10919.517194] [<ffffffffc05ccf5d>] xfs_dir_lookup+0x1bd/0x1e0 [xfs]
[10919.517233] [<ffffffffc05fd8d9>] xfs_lookup+0x69/0x140 [xfs]
[10919.517271] [<ffffffffc05fa018>] xfs_vn_lookup+0x78/0xc0 [xfs]
[10919.517278] [<ffffffffb9425cf3>] lookup_real+0x23/0x60
[10919.517283] [<ffffffffb9426702>] __lookup_hash+0x42/0x60
[10919.517288] [<ffffffffb942d519>] SYSC_renameat2+0x3a9/0x5a0
[10919.517296] [<ffffffffb94d3753>] ? selinux_file_free_security+0x23/0x30
[10919.517304] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[10919.517309] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[10919.517313] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[10919.517318] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[10919.517323] [<ffffffffb942e58e>] SyS_renameat2+0xe/0x10
[10919.517328] [<ffffffffb942e5ce>] SyS_rename+0x1e/0x20
[10919.517333] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[10919.517339] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11159.496095] INFO: task glusteriotwr9:15482 blocked for more than 120 seconds.
[11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11159.498978] glusteriotwr9   D ffff971fa0fa1fa0     0 15482      1 0x00000080
[11159.498984] Call Trace:
[11159.498995] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50
[11159.498999] [<ffffffffb9913f79>] schedule+0x29/0x70
[11159.499003] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
[11159.499056] [<ffffffffc05dd9b7>] ? xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs]
[11159.499082] [<ffffffffc05dd43e>] ? xfs_iext_bno_to_irec+0x8e/0xd0 [xfs]
[11159.499090] [<ffffffffb92f7a12>] ? ktime_get_ts64+0x52/0xf0
[11159.499093] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50
[11159.499097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130
[11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20
[11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50
[11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0
[11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90
[11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40
[11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70
[11159.499125] [<ffffffffb9394e85>] grab_cache_page_write_begin+0x55/0xc0
[11159.499130] [<ffffffffb9484b76>] iomap_write_begin+0x66/0x100
[11159.499135] [<ffffffffb9484edf>] iomap_write_actor+0xcf/0x1d0
[11159.499140] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80
[11159.499144] [<ffffffffb94854e7>] iomap_apply+0xb7/0x150
[11159.499149] [<ffffffffb9485621>] iomap_file_buffered_write+0xa1/0xe0
[11159.499153] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80
[11159.499182] [<ffffffffc05f025d>] xfs_file_buffered_aio_write+0x12d/0x2c0 [xfs]
[11159.499213] [<ffffffffc05f057d>] xfs_file_aio_write+0x18d/0x1b0 [xfs]
[11159.499217] [<ffffffffb941a533>] do_sync_write+0x93/0xe0
[11159.499222] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0
[11159.499225] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0
[11159.499230] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11159.499234] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11159.499238] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than 120 seconds.
[11279.490197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.491665] xfsaild/dm-10   D ffff9720a8660fd0     0  1134      2 0x00000000
[11279.491671] Call Trace:
[11279.491682] [<ffffffffb92a3a2e>] ? try_to_del_timer_sync+0x5e/0x90
[11279.491688] [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.491744] [<ffffffffc060de36>] _xfs_log_force+0x1c6/0x2c0 [xfs]
[11279.491750] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.491783] [<ffffffffc0619fec>] ? xfsaild+0x16c/0x6f0 [xfs]
[11279.491817] [<ffffffffc060df5c>] xfs_log_force+0x2c/0x70 [xfs]
[11279.491849] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[11279.491880] [<ffffffffc0619fec>] xfsaild+0x16c/0x6f0 [xfs]
[11279.491913] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[11279.491919] [<ffffffffb92bb161>] kthread+0xd1/0xe0
[11279.491926] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40
[11279.491932] [<ffffffffb9920677>] ret_from_fork_nospec_begin+0x21/0x21
[11279.491936] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40
[11279.491976] INFO: task glusterclogfsyn:14934 blocked for more than 120 seconds.
[11279.493466] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.494952] glusterclogfsyn D ffff97209832af70     0 14934      1 0x00000080
[11279.494957] Call Trace:
[11279.494979] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod]
[11279.494983] [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.494987] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
[11279.494997] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod]
[11279.495001] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130
[11279.495005] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140
[11279.495010] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110
[11279.495049] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs]
[11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs]
[11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[11279.495090] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11279.495094] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20
[11279.495098] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11279.495102] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11279.495105] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds.
[11279.496606] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.498114] glusterposixfsy D ffff972495f84f10     0 14941      1 0x00000080
[11279.498118] Call Trace:
[11279.498134] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod]
[11279.498138] [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.498142] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
[11279.498152] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod]
[11279.498156] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130
[11279.498160] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140
[11279.498165] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.498169] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110
[11279.498202] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs]
[11279.498231] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs]
[11279.498238] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[11279.498242] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11279.498246] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20
[11279.498250] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11279.498254] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11279.498257] INFO: task glusteriotwr1:14950 blocked for more than 120 seconds.
[11279.499789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.501343] glusteriotwr1   D ffff97208b6daf70     0 14950      1 0x00000080
[11279.501348] Call Trace:
[11279.501353] [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.501390] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[11279.501396] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.501428] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[11279.501432] [<ffffffffb944ef3f>] generic_write_sync+0x4f/0x70
[11279.501461] [<ffffffffc05f0545>] xfs_file_aio_write+0x155/0x1b0 [xfs]
[11279.501466] [<ffffffffb941a533>] do_sync_write+0x93/0xe0
[11279.501471] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0
[11279.501475] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0
[11279.501479] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11279.501483] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11279.501489] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11279.501493] INFO: task glusteriotwr4:14953 blocked for more than 120 seconds.
[11279.503047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.504630] glusteriotwr4   D ffff972499f2bf40     0 14953      1 0x00000080
[11279.504635] Call Trace:
[11279.504640] [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.504676] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[11279.504681] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.504710] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[11279.504714] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[11279.504718] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11279.504722] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20
[11279.504725] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11279.504730] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[12127.466494] perf: interrupt took too long (8263 > 8150), lowering kernel.perf_event_max_sample_rate to 24000
--------------------

I think this is the cause of the massive oVirt performance issues, irrespective of gluster volume. At the time this happened, I was also ssh'ed into the host, running some rpm query commands. I had just run "rpm -qa | grep glusterfs" (to verify what version was actually installed), and that command took almost 2 minutes to return! Normally it takes less than 2 seconds. That is all pure local SSD IO, too.
I'm no expert, but it's my understanding that any time software causes these kinds of issues, it's a serious bug in that software, even if it's only mis-handled exceptions. Is this correct?
--Jim
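[Editorial note: the "blocked for more than 120 seconds" lines in the traces above come from the kernel's hung-task watchdog. A minimal sketch of spotting and counting those warnings on a host follows; the sysctl name and 120-second default are standard kernel knobs, not anything specific to this cluster, and the sample log line is only a fallback for when dmesg isn't readable.]

```shell
# Count hung-task warnings in the kernel log. dmesg may need root; fall back
# to a sample line (taken from the traces above) so the pipeline still runs.
log=$( dmesg 2>/dev/null || printf 'INFO: task glusteriotwr9:15482 blocked for more than 120 seconds.\n' )
hung=$( printf '%s\n' "$log" | grep -c 'blocked for more than' )
echo "hung-task warnings seen: $hung"

# The threshold itself is tunable; setting it to 0 disables the *warning*,
# not the underlying hang:
#   sysctl kernel.hung_task_timeout_secs
```

The warnings only tell you a task sat in uninterruptible sleep; the interesting part is the stack trace under each one, which here points at XFS log forces and fsync.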
On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir <jim@palousetech.com> wrote:
I think this is the profile information for one of the volumes that lives on the SSDs and is fully operational with no down/problem disks:
[root@ovirt2 yum.repos.d]# gluster volume profile data info
Brick: ovirt2.nwfiber.com:/gluster/brick2/data
----------------------------------------------
Cumulative Stats:
   Block Size:       256b+        512b+       1024b+
 No. of Reads:         983         2696         1059
No. of Writes:           0         1113          302

   Block Size:      2048b+       4096b+       8192b+
 No. of Reads:         852        88608        53526
No. of Writes:         522       812340        76257

   Block Size:     16384b+      32768b+      65536b+
 No. of Reads:       54351       241901        15024
No. of Writes:       21636         8656         8976

   Block Size:    131072b+
 No. of Reads:      524156
No. of Writes:      296071

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls   Fop
 ---------   -----------   -----------   -----------   ------------   ----
      0.00       0.00 us       0.00 us       0.00 us           4189   RELEASE
      0.00       0.00 us       0.00 us       0.00 us           1257   RELEASEDIR
      0.00      46.19 us      12.00 us     187.00 us             69   FLUSH
      0.00     147.00 us      78.00 us     367.00 us             86   REMOVEXATTR
      0.00     223.46 us      24.00 us    1166.00 us            149   READDIR
      0.00     565.34 us      76.00 us    3639.00 us             88   FTRUNCATE
      0.00     263.28 us      20.00 us   28385.00 us            228   LK
      0.00      98.84 us       2.00 us     880.00 us           1198   OPENDIR
      0.00      91.59 us      26.00 us   10371.00 us           3853   STATFS
      0.00     494.14 us      17.00 us  193439.00 us           1171   GETXATTR
      0.00     299.42 us      35.00 us    9799.00 us           2044   READDIRP
      0.00    1965.31 us     110.00 us  382258.00 us            321   XATTROP
      0.01     113.40 us      24.00 us   61061.00 us           8134   STAT
      0.01     755.38 us      57.00 us  607603.00 us           3196   DISCARD
      0.05    2690.09 us      58.00 us 2704761.00 us           3206   OPEN
      0.10  119978.25 us      97.00 us 9406684.00 us            154   SETATTR
      0.18     101.73 us      28.00 us  700477.00 us         313379   FSTAT
      0.23    1059.84 us      25.00 us 2716124.00 us          38255   LOOKUP
      0.47    1024.11 us      54.00 us 6197164.00 us          81455   FXATTROP
      1.72    2984.00 us      15.00 us 37098954.00 us        103020   FINODELK
      5.92   44315.32 us      51.00 us 24731536.00 us         23957   FSYNC
     13.27    2399.78 us      25.00 us 22089540.00 us        991005   READ
     37.00    5980.43 us      52.00 us 22099889.00 us        1108976   WRITE
     41.04    5452.75 us      13.00 us 22102452.00 us        1349053   INODELK

    Duration: 10026 seconds
   Data Read: 80046027759 bytes
Data Written: 44496632320 bytes
Interval 1 Stats:
   Block Size:       256b+        512b+       1024b+
 No. of Reads:         983         2696         1059
No. of Writes:           0          838          185

   Block Size:      2048b+       4096b+       8192b+
 No. of Reads:         852        85856        51575
No. of Writes:         382       705802        57812

   Block Size:     16384b+      32768b+      65536b+
 No. of Reads:       52673       232093        14984
No. of Writes:       13499         4908         4242

   Block Size:    131072b+
 No. of Reads:      460040
No. of Writes:        6411

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls   Fop
 ---------   -----------   -----------   -----------   ------------   ----
      0.00       0.00 us       0.00 us       0.00 us           2093   RELEASE
      0.00       0.00 us       0.00 us       0.00 us           1093   RELEASEDIR
      0.00      53.38 us      26.00 us     111.00 us             16   FLUSH
      0.00     145.14 us      78.00 us     367.00 us             71   REMOVEXATTR
      0.00     190.96 us     114.00 us     298.00 us             71   SETATTR
      0.00     213.38 us      24.00 us    1145.00 us             90   READDIR
      0.00     263.28 us      20.00 us   28385.00 us            228   LK
      0.00     101.76 us       2.00 us     880.00 us           1093   OPENDIR
      0.01      93.60 us      27.00 us   10371.00 us           3090   STATFS
      0.02     537.47 us      17.00 us  193439.00 us           1038   GETXATTR
      0.03     297.44 us      35.00 us    9799.00 us           1990   READDIRP
      0.03    2357.28 us     110.00 us  382258.00 us            253   XATTROP
      0.04     385.93 us      58.00 us   47593.00 us           2091   OPEN
      0.04     114.86 us      24.00 us   61061.00 us           7715   STAT
      0.06     444.59 us      57.00 us  333240.00 us           3053   DISCARD
      0.42     316.24 us      25.00 us  290728.00 us          29823   LOOKUP
      0.73     257.92 us      54.00 us  344812.00 us          63296   FXATTROP
      1.37      98.30 us      28.00 us   67621.00 us         313172   FSTAT
      1.58    2124.69 us      51.00 us  849200.00 us          16717   FSYNC
      5.73     162.46 us      52.00 us  748492.00 us         794079   WRITE
      7.19    2065.17 us      16.00 us 37098954.00 us          78381   FINODELK
     36.44     886.32 us      25.00 us 2216436.00 us         925421   READ
     46.30    1178.04 us      13.00 us 1700704.00 us         884635   INODELK

    Duration: 7485 seconds
   Data Read: 71250527215 bytes
Data Written: 5119903744 bytes
Brick: ovirt3.nwfiber.com:/gluster/brick2/data
----------------------------------------------
Cumulative Stats:
   Block Size:         1b+
 No. of Reads:           0
No. of Writes:     3264419

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls   Fop
 ---------   -----------   -----------   -----------   ------------   ----
      0.00       0.00 us       0.00 us       0.00 us             90   FORGET
      0.00       0.00 us       0.00 us       0.00 us           9462   RELEASE
      0.00       0.00 us       0.00 us       0.00 us           4254   RELEASEDIR
      0.00      50.52 us      13.00 us     190.00 us             71   FLUSH
      0.00     186.97 us      87.00 us     713.00 us             86   REMOVEXATTR
      0.00      79.32 us      33.00 us     189.00 us            228   LK
      0.00     220.98 us     129.00 us     513.00 us             86   SETATTR
      0.01     259.30 us      26.00 us    2632.00 us            137   READDIR
      0.02     322.76 us     145.00 us    2125.00 us            321   XATTROP
      0.03     109.55 us       2.00 us    1258.00 us           1193   OPENDIR
      0.05      70.21 us      21.00 us     431.00 us           3196   DISCARD
      0.05     169.26 us      21.00 us    2315.00 us           1545   GETXATTR
      0.12     176.85 us      63.00 us    2844.00 us           3206   OPEN
      0.61     303.49 us      90.00 us    3085.00 us           9633   FSTAT
      2.44     305.66 us      28.00 us    3716.00 us          38230   LOOKUP
      4.52     266.22 us      55.00 us   53424.00 us          81455   FXATTROP
      6.96    1397.99 us      51.00 us   64822.00 us          23889   FSYNC
     16.48      84.74 us      25.00 us    6917.00 us         932592   WRITE
     30.16     106.90 us      13.00 us 3920189.00 us        1353046   INODELK
     38.55    1794.52 us      14.00 us 16210553.00 us        103039   FINODELK

    Duration: 66562 seconds
   Data Read: 0 bytes
Data Written: 3264419 bytes
Interval 1 Stats:
   Block Size:         1b+
 No. of Reads:           0
No. of Writes:      794080

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls   Fop
 ---------   -----------   -----------   -----------   ------------   ----
      0.00       0.00 us       0.00 us       0.00 us           2093   RELEASE
      0.00       0.00 us       0.00 us       0.00 us           1093   RELEASEDIR
      0.00      70.31 us      26.00 us     125.00 us             16   FLUSH
      0.00     193.10 us     103.00 us     713.00 us             71   REMOVEXATTR
      0.01     227.32 us     133.00 us     513.00 us             71   SETATTR
      0.01      79.32 us      33.00 us     189.00 us            228   LK
      0.01     259.83 us      35.00 us    1138.00 us             89   READDIR
      0.03     318.26 us     145.00 us    2047.00 us            253   XATTROP
      0.04     112.67 us       3.00 us    1258.00 us           1093   OPENDIR
      0.06     167.98 us      23.00 us    1951.00 us           1014   GETXATTR
      0.08      70.97 us      22.00 us     431.00 us           3053   DISCARD
      0.13     183.78 us      66.00 us    2844.00 us           2091   OPEN
      1.01     303.82 us      90.00 us    3085.00 us           9610   FSTAT
      3.27     316.59 us      30.00 us    3716.00 us          29820   LOOKUP
      5.83     265.79 us      59.00 us   53424.00 us          63296   FXATTROP
      7.95    1373.89 us      51.00 us   64822.00 us          16717   FSYNC
     23.17     851.99 us      14.00 us 16210553.00 us          78555   FINODELK
     24.04      87.44 us      27.00 us    6917.00 us         794081   WRITE
     34.36     111.91 us      14.00 us  984871.00 us         886790   INODELK

    Duration: 7485 seconds
   Data Read: 0 bytes
Data Written: 794080 bytes
-----------------------
Here is the data from the volume that is backed by the SSHDs and has one failed disk:

[root@ovirt2 yum.repos.d]# gluster volume profile data-hdd info
Brick: 172.172.1.12:/gluster/brick3/data-hdd
--------------------------------------------
Cumulative Stats:
   Block Size:       256b+        512b+       1024b+
 No. of Reads:        1702           86           16
No. of Writes:           0          767           71

   Block Size:      2048b+       4096b+       8192b+
 No. of Reads:          19        51841         2049
No. of Writes:          76        60668        35727

   Block Size:     16384b+      32768b+      65536b+
 No. of Reads:        1744          639         1088
No. of Writes:        8524         2410         1285

   Block Size:    131072b+
 No. of Reads:      771999
No. of Writes:       29584

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls   Fop
 ---------   -----------   -----------   -----------   ------------   ----
      0.00       0.00 us       0.00 us       0.00 us           2902   RELEASE
      0.00       0.00 us       0.00 us       0.00 us           1517   RELEASEDIR
      0.00     197.00 us     197.00 us     197.00 us              1   FTRUNCATE
      0.00      70.24 us      16.00 us     758.00 us             51   FLUSH
      0.00     143.93 us      82.00 us     305.00 us             57   REMOVEXATTR
      0.00     178.63 us     105.00 us     712.00 us             60   SETATTR
      0.00      67.30 us      19.00 us     572.00 us            555   LK
      0.00     322.80 us      23.00 us    4673.00 us            138   READDIR
      0.00     336.56 us     106.00 us   11994.00 us            237   XATTROP
      0.00      84.70 us      28.00 us    1071.00 us           3469   STATFS
      0.01     387.75 us       2.00 us  146017.00 us           1467   OPENDIR
      0.01     148.59 us      21.00 us   64374.00 us           4454   STAT
      0.02     783.02 us      16.00 us   93502.00 us           1902   GETXATTR
      0.03    1516.10 us      17.00 us  210690.00 us           1364   ENTRYLK
      0.03    2555.47 us     300.00 us  674454.00 us           1064   READDIRP
      0.07      85.74 us      19.00 us   68340.00 us          62849   FSTAT
      0.07    1978.12 us      59.00 us  202596.00 us           2729   OPEN
      0.22     708.57 us      15.00 us  394799.00 us          25447   LOOKUP
      5.94    2331.74 us      15.00 us 1099530.00 us         207534   FINODELK
      7.31    8311.75 us      58.00 us 1800216.00 us          71668   FXATTROP
     12.49    7735.19 us      51.00 us 3595513.00 us         131642   WRITE
     17.70     957.08 us      16.00 us 13700466.00 us        1508160   INODELK
     24.55    2546.43 us      26.00 us 5077347.00 us         786060   READ
     31.56   49699.15 us      47.00 us 3746331.00 us          51777   FSYNC

    Duration: 10101 seconds
   Data Read: 101562897361 bytes
Data Written: 4834450432 bytes
Interval 0 Stats:
   Block Size:       256b+        512b+       1024b+
 No. of Reads:        1702           86           16
No. of Writes:           0          767           71

   Block Size:      2048b+       4096b+       8192b+
 No. of Reads:          19        51841         2049
No. of Writes:          76        60668        35727

   Block Size:     16384b+      32768b+      65536b+
 No. of Reads:        1744          639         1088
No. of Writes:        8524         2410         1285

   Block Size:    131072b+
 No. of Reads:      771999
No. of Writes:       29584

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls   Fop
 ---------   -----------   -----------   -----------   ------------   ----
      0.00       0.00 us       0.00 us       0.00 us           2902   RELEASE
      0.00       0.00 us       0.00 us       0.00 us           1517   RELEASEDIR
      0.00     197.00 us     197.00 us     197.00 us              1   FTRUNCATE
      0.00      70.24 us      16.00 us     758.00 us             51   FLUSH
      0.00     143.93 us      82.00 us     305.00 us             57   REMOVEXATTR
      0.00     178.63 us     105.00 us     712.00 us             60   SETATTR
      0.00      67.30 us      19.00 us     572.00 us            555   LK
      0.00     322.80 us      23.00 us    4673.00 us            138   READDIR
      0.00     336.56 us     106.00 us   11994.00 us            237   XATTROP
      0.00      84.70 us      28.00 us    1071.00 us           3469   STATFS
      0.01     387.75 us       2.00 us  146017.00 us           1467   OPENDIR
      0.01     148.59 us      21.00 us   64374.00 us           4454   STAT
      0.02     783.02 us      16.00 us   93502.00 us           1902   GETXATTR
      0.03    1516.10 us      17.00 us  210690.00 us           1364   ENTRYLK
      0.03    2555.47 us     300.00 us  674454.00 us           1064   READDIRP
      0.07      85.73 us      19.00 us   68340.00 us          62849   FSTAT
      0.07    1978.12 us      59.00 us  202596.00 us           2729   OPEN
      0.22     708.57 us      15.00 us  394799.00 us          25447   LOOKUP
      5.94    2334.57 us      15.00 us 1099530.00 us         207534   FINODELK
      7.31    8311.49 us      58.00 us 1800216.00 us          71668   FXATTROP
     12.49    7735.32 us      51.00 us 3595513.00 us         131642   WRITE
     17.71     957.08 us      16.00 us 13700466.00 us        1508160   INODELK
     24.56    2546.42 us      26.00 us 5077347.00 us         786060   READ
     31.54   49651.63 us      47.00 us 3746331.00 us          51777   FSYNC

    Duration: 10101 seconds
   Data Read: 101562897361 bytes
Data Written: 4834450432 bytes
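[Editorial note: for reference, output like the above is produced by gluster's built-in profiler. A minimal sketch of the usual command sequence follows; these are standard gluster CLI subcommands, the volume name is the one from this thread, and the guard is only so the sketch degrades gracefully on a machine without the CLI.]

```shell
# Gather per-FOP latency stats for a volume (run from any peer in the pool).
VOL=data-hdd                                  # volume name from this thread
if command -v gluster >/dev/null 2>&1; then
    gluster volume profile "$VOL" start       # begin collecting stats
    sleep 5                                   # let a representative workload run (longer in practice)
    gluster volume profile "$VOL" info        # cumulative + interval stats, as shown above
    gluster volume profile "$VOL" stop        # profiling adds overhead; stop when done
else
    echo "gluster CLI not found; commands shown for reference only"
fi
```

Each `info` call also resets the interval counters, which is why the output above shows both "Cumulative Stats" and "Interval N Stats" sections.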
On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir <jim@palousetech.com> wrote:
Thank you for your response.
I have 4 gluster volumes. 3 are replica 2 + arbiter: the replica bricks are on ovirt1 and ovirt2, with the arbiter on ovirt3. The 4th volume is replica 3, with a brick on all three ovirt machines.
The first 3 volumes are on an SSD disk; the 4th is on a Seagate SSHD (the same in all three machines). On ovirt3, the SSHD has reported hard IO failures, and that brick is offline. However, the other two replicas are fully operational (although they still show entries in the heal info command that won't go away, but that may be the case until I replace the failed disk).
What is bothering me is that ALL 4 gluster volumes are showing horrible performance issues. At this point, as the bad disk has been completely offlined, I would expect gluster to perform at normal speed, but that is definitely not the case.
I've also noticed that the performance hits seem to come in waves: things work acceptably (but slowly) for a while, then suddenly it's as if all disk IO on all volumes (including non-gluster local OS disk volumes for the hosts) pauses for about 30 seconds, then IO resumes again. During those times, I start getting "VM not responding" and "host not responding" notices, and the applications have major issues.
I've shut down most of my VMs and am down to just my essential core VMs (I shed about 75% of my VMs). I'm still experiencing the same issues.
Am I correct in believing that once the failed disk was brought offline that performance should return to normal?
On Tue, May 29, 2018 at 1:27 PM, Alex K <rightkicktech@gmail.com> wrote:
> I would check disks status and accessibility of mount points where > your gluster volumes reside. > > On Tue, May 29, 2018, 22:28 Jim Kusznir <jim@palousetech.com> wrote: > >> On one ovirt server, I'm now seeing these messages: >> [56474.239725] blk_update_request: 63 callbacks suppressed >> [56474.239732] blk_update_request: I/O error, dev dm-2, sector 0 >> [56474.240602] blk_update_request: I/O error, dev dm-2, sector >> 3905945472 >> [56474.241346] blk_update_request: I/O error, dev dm-2, sector >> 3905945584 >> [56474.242236] blk_update_request: I/O error, dev dm-2, sector 2048 >> [56474.243072] blk_update_request: I/O error, dev dm-2, sector >> 3905943424 >> [56474.243997] blk_update_request: I/O error, dev dm-2, sector >> 3905943536 >> [56474.247347] blk_update_request: I/O error, dev dm-2, sector 0 >> [56474.248315] blk_update_request: I/O error, dev dm-2, sector >> 3905945472 >> [56474.249231] blk_update_request: I/O error, dev dm-2, sector >> 3905945584 >> [56474.250221] blk_update_request: I/O error, dev dm-2, sector 2048 >> >> >> >> >> On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir <jim@palousetech.com> >> wrote: >> >>> I see in messages on ovirt3 (my 3rd machine, the one upgraded to >>> 4.2): >>> >>> May 29 11:54:41 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>> database connection failed (No such file or directory) >>> May 29 11:54:51 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>> database connection failed (No such file or directory) >>> May 29 11:55:01 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>> database connection failed (No such file or directory) >>> (appears a lot). >>> >>> I also found on the ssh session of that, some sysv warnings about >>> the backing disk for one of the gluster volumes (straight replica 3). The >>> glusterfs process for that disk on that machine went offline. 
Its my >>> understanding that it should continue to work with the other two machines >>> while I attempt to replace that disk, right? Attempted writes (touching an >>> empty file) can take 15 seconds, repeating it later will be much faster. >>> >>> Gluster generates a bunch of different log files, I don't know >>> what ones you want, or from which machine(s). >>> >>> How do I do "volume profiling"? >>> >>> Thanks! >>> >>> On Tue, May 29, 2018 at 11:53 AM, Sahina Bose <sabose@redhat.com> >>> wrote: >>> >>>> Do you see errors reported in the mount logs for the volume? If >>>> so, could you attach the logs? >>>> Any issues with your underlying disks. Can you also attach output >>>> of volume profiling? >>>> >>>> On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir < >>>> jim@palousetech.com> wrote: >>>> >>>>> Ok, things have gotten MUCH worse this morning. I'm getting >>>>> random errors from VMs, right now, about a third of my VMs have been paused >>>>> due to storage issues, and most of the remaining VMs are not performing >>>>> well. >>>>> >>>>> At this point, I am in full EMERGENCY mode, as my production >>>>> services are now impacted, and I'm getting calls coming in with problems... >>>>> >>>>> I'd greatly appreciate help...VMs are running VERY slowly (when >>>>> they run), and they are steadily getting worse. I don't know why. I was >>>>> seeing CPU peaks (to 100%) on several VMs, in perfect sync, for a few >>>>> minutes at a time (while the VM became unresponsive and any VMs I was >>>>> logged into that were linux were giving me the CPU stuck messages in my >>>>> origional post). Is all this storage related? >>>>> >>>>> I also have two different gluster volumes for VM storage, and >>>>> only one had the issues, but now VMs in both are being affected at the same >>>>> time and same way. 
>>>>> >>>>> --Jim >>>>> >>>>> On Mon, May 28, 2018 at 10:50 PM, Sahina Bose <sabose@redhat.com >>>>> > wrote: >>>>> >>>>>> [Adding gluster-users to look at the heal issue] >>>>>> >>>>>> On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir < >>>>>> jim@palousetech.com> wrote: >>>>>> >>>>>>> Hello: >>>>>>> >>>>>>> I've been having some cluster and gluster performance issues >>>>>>> lately. I also found that my cluster was out of date, and was trying to >>>>>>> apply updates (hoping to fix some of these), and discovered the ovirt 4.1 >>>>>>> repos were taken completely offline. So, I was forced to begin an upgrade >>>>>>> to 4.2. According to docs I found/read, I needed only add the new repo, do >>>>>>> a yum update, reboot, and be good on my hosts (did the yum update, the >>>>>>> engine-setup on my hosted engine). Things seemed to work relatively well, >>>>>>> except for a gluster sync issue that showed up. >>>>>>> >>>>>>> My cluster is a 3 node hyperconverged cluster. I upgraded the >>>>>>> hosted engine first, then engine 3. When engine 3 came back up, for some >>>>>>> reason one of my gluster volumes would not sync. 
Here's sample output: >>>>>>> >>>>>>> [root@ovirt3 ~]# gluster volume heal data-hdd info >>>>>>> Brick 172.172.1.11:/gluster/brick3/data-hdd >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>>>>>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>>>>>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>>>>>> 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>>>>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>>>>>> 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>>>>>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>>>>> Status: Connected >>>>>>> Number of entries: 8 >>>>>>> >>>>>>> Brick 172.172.1.12:/gluster/brick3/data-hdd >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>>>>>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>>>>>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>>>>>> 
4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>>>>>> 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>>>>>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>>>>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>>>>> Status: Connected >>>>>>> Number of entries: 8 >>>>>>> >>>>>>> Brick 172.172.1.13:/gluster/brick3/data-hdd >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>>>>>> 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>>>>>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>>>>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>>>>>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>>>>>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>>>>>> 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>>>>> Status: Connected >>>>>>> Number of entries: 8 >>>>>>> >>>>>>> --------- >>>>>>> Its been in this state for a couple days now, and bandwidth >>>>>>> monitoring shows no appreciable data moving. I've tried repeatedly >>>>>>> commanding a full heal from all three clusters in the node. 
Its always the >>>>>>> same files that need healing. >>>>>>> >>>>>>> When running gluster volume heal data-hdd statistics, I see >>>>>>> sometimes different information, but always some number of "heal failed" >>>>>>> entries. It shows 0 for split brain. >>>>>>> >>>>>>> I'm not quite sure what to do. I suspect it may be due to >>>>>>> nodes 1 and 2 still being on the older ovirt/gluster release, but I'm >>>>>>> afraid to upgrade and reboot them until I have a good gluster sync (don't >>>>>>> need to create a split brain issue). How do I proceed with this? >>>>>>> >>>>>>> Second issue: I've been experiencing VERY POOR performance on >>>>>>> most of my VMs. To the tune that logging into a windows 10 vm via remote >>>>>>> desktop can take 5 minutes, launching quickbooks inside said vm can easily >>>>>>> take 10 minutes. On some linux VMs, I get random messages like this: >>>>>>> Message from syslogd@unifi at May 28 20:39:23 ... >>>>>>> kernel:[6171996.308904] NMI watchdog: BUG: soft lockup - >>>>>>> CPU#0 stuck for 22s! [mongod:14766] >>>>>>> >>>>>>> (the process and PID are often different) >>>>>>> >>>>>>> I'm not quite sure what to do about this either. My initial >>>>>>> thought was upgrad everything to current and see if its still there, but I >>>>>>> cannot move forward with that until my gluster is healed... >>>>>>> >>>>>>> Thanks! 
>>>>>>> --Jim >>>>>>> >>>>>>> _______________________________________________ >>>>>>> Users mailing list -- users@ovirt.org >>>>>>> To unsubscribe send an email to users-leave@ovirt.org >>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ >>>>>>> oVirt Code of Conduct: https://www.ovirt.org/communit >>>>>>> y/about/community-guidelines/ >>>>>>> List Archives: https://lists.ovirt.org/archiv >>>>>>> es/list/users@ovirt.org/message/3LEV6ZQ3JV2XLAL7NYBTXOYMYUOT >>>>>>> IRQF/ >>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> _______________________________________________ >> Users mailing list -- users@ovirt.org >> To unsubscribe send an email to users-leave@ovirt.org >> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ >> oVirt Code of Conduct: https://www.ovirt.org/communit >> y/about/community-guidelines/ >> List Archives: https://lists.ovirt.org/archiv >> es/list/users@ovirt.org/message/ACO7RFSLBSRBAIONIC2HQ6Z24ZDES5MF/ >> >
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/JKBSLSZFRMFGILXGZB4YAMJIJKEKNGMA/
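[Editorial note: the "check disks status and accessibility of mount points" advice in the quoted reply can be sketched as a quick script. The /gluster/brickN paths follow the convention used in this thread; the smartctl device name is a placeholder, not taken from the original messages.]

```shell
# Verify each brick filesystem is actually mounted and has space.
n=0
for brick in /gluster/brick1 /gluster/brick2 /gluster/brick3; do
    n=$((n + 1))
    if mountpoint -q "$brick" 2>/dev/null; then
        echo "$brick: mounted"
        df -h "$brick"
    else
        echo "$brick: not mounted here (or not a mountpoint)"
    fi
done

# I/O errors like the blk_update_request lines quoted above land in the kernel log:
dmesg 2>/dev/null | grep -c 'blk_update_request' || true

# SMART health for a suspect disk (device name is a placeholder):
#   smartctl -H /dev/sdX
```

A brick directory that exists but is *not* a mountpoint is a classic failure mode: gluster may then write into the root filesystem instead of the intended disk.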

I've just gotten back to where hosted-engine --vm-status does (eventually) return some output other than the "ha-agent or storage is down" error. It currently takes up to two minutes, and the last run returned stale data.

Gluster-wise, "gluster volume heal <volume> info" shows the three volumes with all bricks up at 0 entries (and they return quickly). The 4th volume, missing one brick (but still with two replicas), does return entries, and is currently taking a very long time to list them. Making progress toward getting it online again... I don't know why I get stale data in hosted-engine --vm-status or how to overcome that...

--Jim

On Tue, May 29, 2018 at 11:38 PM, Jim Kusznir <jim@palousetech.com> wrote:
I believe the gluster data store for the engine is up and working correctly. The rest are not mounted, as the engine hasn't managed to start correctly yet. I did perform the copy-a-junk-file test: copy a file into the data store, check that it's there on another host, then delete it and see that it disappeared on the first host; it passed that test. Here's the info and status. (I have NOT performed the steps that Krutika and Ravishankar suggested yet, as I don't have my data volumes working again yet.)
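(As an aside: a quick, hedged sketch for condensing heal listings like the ones earlier in this thread into per-brick counts. It uses only standard awk; `summarize_heal` is a name I made up, and the demo runs on captured sample text rather than a live cluster.)

```shell
#!/bin/sh
# Sketch: condense "gluster volume heal <vol> info" output into one line
# per brick with its pending-entry count. Reads the heal-info text on stdin.
summarize_heal() {
  awk '
    /^Brick /             { brick = $2 }
    /^Number of entries:/ { print brick ": " $NF " entries pending" }
  '
}

# Demo on a captured sample; live usage would be something like:
#   gluster volume heal data-hdd info | summarize_heal
summarize_heal <<'EOF'
Brick 172.172.1.11:/gluster/brick3/data-hdd
Status: Connected
Number of entries: 8
Brick 172.172.1.13:/gluster/brick3/data-hdd
Status: Connected
Number of entries: 0
EOF
```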
[root@ovirt2 images]# gluster volume info
Volume Name: data
Type: Replicate
Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick2/data
Brick2: ovirt2.nwfiber.com:/gluster/brick2/data
Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter)
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
server.allow-insecure: on
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
Volume Name: data-hdd
Type: Replicate
Volume ID: d342a3ab-16f3-49f0-bbcf-f788be8ac5f1
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 172.172.1.11:/gluster/brick3/data-hdd
Brick2: 172.172.1.12:/gluster/brick3/data-hdd
Brick3: 172.172.1.13:/gluster/brick3/data-hdd
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
network.ping-timeout: 30
server.allow-insecure: on
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
performance.readdir-ahead: on
Volume Name: engine
Type: Replicate
Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
Volume Name: iso
Type: Replicate
Volume ID: b1ba15f5-0f0f-4411-89d0-595179f02b92
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick4/iso
Brick2: ovirt2.nwfiber.com:/gluster/brick4/iso
Brick3: ovirt3.nwfiber.com:/gluster/brick4/iso (arbiter)
Options Reconfigured:
performance.strict-o-direct: on
nfs.disable: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 6
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
[root@ovirt2 images]# gluster volume status
Status of volume: data
Gluster process                                 TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick2/data   49152     0          Y       3226
Brick ovirt2.nwfiber.com:/gluster/brick2/data   49152     0          Y       2967
Brick ovirt3.nwfiber.com:/gluster/brick2/data   49152     0          Y       2554
Self-heal Daemon on localhost                   N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y       4771

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: data-hdd
Gluster process                                 TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 172.172.1.11:/gluster/brick3/data-hdd     49153     0          Y       3232
Brick 172.172.1.12:/gluster/brick3/data-hdd     49153     0          Y       2976
Brick 172.172.1.13:/gluster/brick3/data-hdd     N/A       N/A        N       N/A
NFS Server on localhost                         N/A       N/A        N       N/A
Self-heal Daemon on localhost                   N/A       N/A        Y       4818
NFS Server on ovirt3.nwfiber.com                N/A       N/A        N       N/A
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y       17853
NFS Server on ovirt1.nwfiber.com                N/A       N/A        N       N/A
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y       4771

Task Status of Volume data-hdd
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: engine
Gluster process                                 TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick1/engine 49154     0          Y       3239
Brick ovirt2.nwfiber.com:/gluster/brick1/engine 49154     0          Y       2982
Brick ovirt3.nwfiber.com:/gluster/brick1/engine 49154     0          Y       2578
Self-heal Daemon on localhost                   N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y       4771

Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: iso
Gluster process                                 TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick4/iso    49155     0          Y       3247
Brick ovirt2.nwfiber.com:/gluster/brick4/iso    49155     0          Y       2990
Brick ovirt3.nwfiber.com:/gluster/brick4/iso    49155     0          Y       2580
Self-heal Daemon on localhost                   N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y       4771

Task Status of Volume iso
------------------------------------------------------------------------------
There are no active volume tasks
On Tue, May 29, 2018 at 11:30 PM, Sahina Bose <sabose@redhat.com> wrote:
On Wed, May 30, 2018 at 10:42 AM, Jim Kusznir <jim@palousetech.com> wrote:
hosted-engine --deploy failed (would not come up on my existing gluster storage). However, I realized no changes were written to my existing storage. So, I went back to trying to get my old engine running.
hosted-engine --vm-status is now taking a very long time (5+ minutes) to return, and it returns stale information everywhere. I thought perhaps the lockspace was corrupt, so I tried to clean that and the metadata, but both are failing (--clean-metadata has hung and I can't even ctrl-c out of it).
How can I reinitialize all the lockspace/metadata safely? There is no engine or VMs running currently....
I think the first thing to make sure is that your storage is up and running. So can you mount the gluster volumes and able to access the contents there? Please provide the gluster volume info and gluster volume status of the volumes that you're using.
--Jim
On Tue, May 29, 2018 at 9:33 PM, Jim Kusznir <jim@palousetech.com> wrote:
Well, things went from bad to very, very bad....
It appears that during one of the 2-minute lockups, the fencing agents decided that another node in the cluster was down. As a result, 2 of the 3 nodes were simultaneously reset by the fencing agents. After the nodes came back up, the engine would not start. All running VMs (including VMs on the 3rd node, which was not rebooted) crashed.
I've now been working for about 3 hours trying to get the engine to come up. I don't know why it won't start. hosted-engine --vm-start says it's starting, but it doesn't start (virsh doesn't show any VMs running). I'm currently running --deploy, as I had run out of other options. I hope this will allow me to re-import all my existing VMs and start them back up after everything comes back up.
I do have an unverified geo-rep backup; I don't know if it is a good backup (there were several prior messages to this list, but I didn't get replies to my questions; it was behaving in ways I believe to be strange, and the data directories are larger than their source).
I'll see if my --deploy works, and if not, I'll be back with another message/help request.
When the dust settles and I'm at least minimally functional again, I really want to understand why all these technologies designed to offer redundancy conspired to reduce uptime and create failures where there weren't any otherwise. I thought that with hosted engine, 3 oVirt servers, and glusterfs with a minimum of replica 2+arbiter or replica 3, I should have had strong resilience against server or disk failure, and should have been protected from (or able to recover from) data corruption. Instead, all of the above happened (once I get my cluster back up, I still have to try to recover my webserver VM, which won't boot due to XFS corrupt-journal issues created during the gluster crashes). I think a lot of these issues were rooted in the upgrade from 4.1 to 4.2.
--Jim
On Tue, May 29, 2018 at 6:25 PM, Jim Kusznir <jim@palousetech.com> wrote:
I also finally found the following in my system log on one server:
[10679.524491] INFO: task glusterclogro:14933 blocked for more than 120 seconds. [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10679.527144] glusterclogro D ffff97209832bf40 0 14933 1 0x00000080 [10679.527150] Call Trace: [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.527268] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10679.527271] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [10679.527275] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.527279] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds. [10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10679.529956] glusterposixfsy D ffff972495f84f10 0 14941 1 0x00000080 [10679.529961] Call Trace: [10679.529966] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.530003] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.530008] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.530038] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.530042] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.530046] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10679.530050] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [10679.530054] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.530058] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120 seconds. [10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
[10679.533732] glusteriotwr13 D ffff9720a83f0000 0 15486 1 0x00000080 [10679.533738] Call Trace: [10679.533747] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.533799] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.533806] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.533846] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.533852] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.533858] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10679.533863] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [10679.533868] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.533873] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10919.512757] INFO: task glusterclogro:14933 blocked for more than 120 seconds. [10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10919.516663] glusterclogro D ffff97209832bf40 0 14933 1 0x00000080 [10919.516677] Call Trace: [10919.516690] [<ffffffffb9913f79>] schedule+0x29/0x70 [10919.516696] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [10919.516703] [<ffffffffb951cc04>] ? blk_finish_plug+0x14/0x40 [10919.516768] [<ffffffffc05e9224>] ? _xfs_buf_ioapply+0x334/0x460 [xfs] [10919.516774] [<ffffffffb991432d>] wait_for_completion+0xfd/0x140 [10919.516782] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10919.516821] [<ffffffffc05eb0a3>] ? _xfs_buf_read+0x23/0x40 [xfs] [10919.516859] [<ffffffffc05eafa9>] xfs_buf_submit_wait+0xf9/0x1d0 [xfs] [10919.516902] [<ffffffffc061b279>] ? 
xfs_trans_read_buf_map+0x199/0x400 [xfs] [10919.516940] [<ffffffffc05eb0a3>] _xfs_buf_read+0x23/0x40 [xfs] [10919.516977] [<ffffffffc05eb1b9>] xfs_buf_read_map+0xf9/0x160 [xfs] [10919.517022] [<ffffffffc061b279>] xfs_trans_read_buf_map+0x199/0x400 [xfs] [10919.517057] [<ffffffffc05c8d04>] xfs_da_read_buf+0xd4/0x100 [xfs] [10919.517091] [<ffffffffc05c8d53>] xfs_da3_node_read+0x23/0xd0 [xfs] [10919.517126] [<ffffffffc05c9fee>] xfs_da3_node_lookup_int+0x6e/0x2f0 [xfs] [10919.517160] [<ffffffffc05d5a1d>] xfs_dir2_node_lookup+0x4d/0x170 [xfs] [10919.517194] [<ffffffffc05ccf5d>] xfs_dir_lookup+0x1bd/0x1e0 [xfs] [10919.517233] [<ffffffffc05fd8d9>] xfs_lookup+0x69/0x140 [xfs] [10919.517271] [<ffffffffc05fa018>] xfs_vn_lookup+0x78/0xc0 [xfs] [10919.517278] [<ffffffffb9425cf3>] lookup_real+0x23/0x60 [10919.517283] [<ffffffffb9426702>] __lookup_hash+0x42/0x60 [10919.517288] [<ffffffffb942d519>] SYSC_renameat2+0x3a9/0x5a0 [10919.517296] [<ffffffffb94d3753>] ? selinux_file_free_security+0x2 3/0x30 [10919.517304] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10919.517309] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10919.517313] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10919.517318] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10919.517323] [<ffffffffb942e58e>] SyS_renameat2+0xe/0x10 [10919.517328] [<ffffffffb942e5ce>] SyS_rename+0x1e/0x20 [10919.517333] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10919.517339] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11159.496095] INFO: task glusteriotwr9:15482 blocked for more than 120 seconds. [11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11159.498978] glusteriotwr9 D ffff971fa0fa1fa0 0 15482 1 0x00000080 [11159.498984] Call Trace: [11159.498995] [<ffffffffb9911f00>] ? 
bit_wait+0x50/0x50 [11159.498999] [<ffffffffb9913f79>] schedule+0x29/0x70 [11159.499003] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11159.499056] [<ffffffffc05dd9b7>] ? xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs] [11159.499082] [<ffffffffc05dd43e>] ? xfs_iext_bno_to_irec+0x8e/0xd0 [xfs] [11159.499090] [<ffffffffb92f7a12>] ? ktime_get_ts64+0x52/0xf0 [11159.499093] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50 [11159.499097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20 [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50 [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0 [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90 [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40 [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70 [11159.499125] [<ffffffffb9394e85>] grab_cache_page_write_begin+0x 55/0xc0 [11159.499130] [<ffffffffb9484b76>] iomap_write_begin+0x66/0x100 [11159.499135] [<ffffffffb9484edf>] iomap_write_actor+0xcf/0x1d0 [11159.499140] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 [11159.499144] [<ffffffffb94854e7>] iomap_apply+0xb7/0x150 [11159.499149] [<ffffffffb9485621>] iomap_file_buffered_write+0xa1 /0xe0 [11159.499153] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 [11159.499182] [<ffffffffc05f025d>] xfs_file_buffered_aio_write+0x12d/0x2c0 [xfs] [11159.499213] [<ffffffffc05f057d>] xfs_file_aio_write+0x18d/0x1b0 [xfs] [11159.499217] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 [11159.499222] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 [11159.499225] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 [11159.499230] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11159.499234] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11159.499238] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than 120 seconds. 
[11279.490197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.491665] xfsaild/dm-10 D ffff9720a8660fd0 0 1134 2 0x00000000 [11279.491671] Call Trace: [11279.491682] [<ffffffffb92a3a2e>] ? try_to_del_timer_sync+0x5e/0x90 [11279.491688] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.491744] [<ffffffffc060de36>] _xfs_log_force+0x1c6/0x2c0 [xfs] [11279.491750] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.491783] [<ffffffffc0619fec>] ? xfsaild+0x16c/0x6f0 [xfs] [11279.491817] [<ffffffffc060df5c>] xfs_log_force+0x2c/0x70 [xfs] [11279.491849] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs] [11279.491880] [<ffffffffc0619fec>] xfsaild+0x16c/0x6f0 [xfs] [11279.491913] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs] [11279.491919] [<ffffffffb92bb161>] kthread+0xd1/0xe0 [11279.491926] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 [11279.491932] [<ffffffffb9920677>] ret_from_fork_nospec_begin+0x2 1/0x21 [11279.491936] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 [11279.491976] INFO: task glusterclogfsyn:14934 blocked for more than 120 seconds. [11279.493466] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.494952] glusterclogfsyn D ffff97209832af70 0 14934 1 0x00000080 [11279.494957] Call Trace: [11279.494979] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod] [11279.494983] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.494987] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11279.494997] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod] [11279.495001] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11279.495005] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140 [11279.495010] [<ffffffffb92cf1b0>] ? 
wake_up_state+0x20/0x20 [11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 [11279.495049] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs] [11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] [11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.495090] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11279.495094] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [11279.495098] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.495102] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11279.495105] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds. [11279.496606] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.498114] glusterposixfsy D ffff972495f84f10 0 14941 1 0x00000080 [11279.498118] Call Trace: [11279.498134] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod] [11279.498138] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.498142] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11279.498152] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod] [11279.498156] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11279.498160] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140 [11279.498165] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.498169] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 [11279.498202] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs] [11279.498231] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] [11279.498238] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.498242] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11279.498246] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [11279.498250] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.498254] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11279.498257] INFO: task glusteriotwr1:14950 blocked for more than 120 seconds. 
[11279.499789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.501343] glusteriotwr1 D ffff97208b6daf70 0 14950 1 0x00000080 [11279.501348] Call Trace: [11279.501353] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.501390] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [11279.501396] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.501428] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [11279.501432] [<ffffffffb944ef3f>] generic_write_sync+0x4f/0x70 [11279.501461] [<ffffffffc05f0545>] xfs_file_aio_write+0x155/0x1b0 [xfs] [11279.501466] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 [11279.501471] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 [11279.501475] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 [11279.501479] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11279.501483] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.501489] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11279.501493] INFO: task glusteriotwr4:14953 blocked for more than 120 seconds. [11279.503047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.504630] glusteriotwr4 D ffff972499f2bf40 0 14953 1 0x00000080 [11279.504635] Call Trace: [11279.504640] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.504676] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [11279.504681] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.504710] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [11279.504714] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.504718] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11279.504722] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [11279.504725] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.504730] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [12127.466494] perf: interrupt took too long (8263 > 8150), lowering kernel.perf_event_max_sample_rate to 24000
--------------------
I think this is the cause of the massive oVirt performance issues, irrespective of gluster volume. At the time this happened, I was also ssh'ed into the host and was running some rpm query commands. I had just run rpm -qa | grep glusterfs (to verify what version was actually installed), and that command took almost 2 minutes to return! Normally it takes less than 2 seconds. That is all pure local SSD IO, too....
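(A hedged sketch for tallying those hung-task warnings by process name, so it's easier to see which gluster threads are stalling most. `count_hung_tasks` is a name I invented; the demo runs on captured sample lines rather than live dmesg.)

```shell
#!/bin/sh
# Sketch: count kernel hung-task warnings per task name from dmesg/syslog
# text on stdin (lines like
#   "INFO: task glusterclogro:14933 blocked for more than 120 seconds.").
count_hung_tasks() {
  awk '
    match($0, /task [^ ]+ blocked for more than/) {
      name = substr($0, RSTART + 5)  # text starting at the task name
      sub(/ .*/, "", name)           # keep just "name:pid"
      sub(/:[0-9]+$/, "", name)      # drop the pid
      n[name]++
    }
    END { for (t in n) print n[t], t }
  ' | sort -rn
}

# Demo on captured samples; live usage would be: dmesg | count_hung_tasks
count_hung_tasks <<'EOF'
[10679.524491] INFO: task glusterclogro:14933 blocked for more than 120 seconds.
[10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120 seconds.
[10919.512757] INFO: task glusterclogro:14933 blocked for more than 120 seconds.
EOF
```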
I'm no expert, but it's my understanding that any time software causes these kinds of issues, it's a serious bug in the software, even if it's just mishandled exceptions. Is this correct?
--Jim
On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir <jim@palousetech.com> wrote:
I think this is the profile information for one of the volumes that lives on the SSDs and is fully operational with no down/problem disks:
[root@ovirt2 yum.repos.d]# gluster volume profile data info Brick: ovirt2.nwfiber.com:/gluster/brick2/data ---------------------------------------------- Cumulative Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 983 2696 1059 No. of Writes: 0 1113 302
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 852 88608 53526 No. of Writes: 522 812340 76257
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 54351 241901 15024 No. of Writes: 21636 8656 8976
Block Size: 131072b+ No. of Reads: 524156 No. of Writes: 296071 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 4189 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1257 RELEASEDIR 0.00 46.19 us 12.00 us 187.00 us 69 FLUSH 0.00 147.00 us 78.00 us 367.00 us 86 REMOVEXATTR 0.00 223.46 us 24.00 us 1166.00 us 149 READDIR 0.00 565.34 us 76.00 us 3639.00 us 88 FTRUNCATE 0.00 263.28 us 20.00 us 28385.00 us 228 LK 0.00 98.84 us 2.00 us 880.00 us 1198 OPENDIR 0.00 91.59 us 26.00 us 10371.00 us 3853 STATFS 0.00 494.14 us 17.00 us 193439.00 us 1171 GETXATTR 0.00 299.42 us 35.00 us 9799.00 us 2044 READDIRP 0.00 1965.31 us 110.00 us 382258.00 us 321 XATTROP 0.01 113.40 us 24.00 us 61061.00 us 8134 STAT 0.01 755.38 us 57.00 us 607603.00 us 3196 DISCARD 0.05 2690.09 us 58.00 us 2704761.00 us 3206 OPEN 0.10 119978.25 us 97.00 us 9406684.00 us 154 SETATTR 0.18 101.73 us 28.00 us 700477.00 us 313379 FSTAT 0.23 1059.84 us 25.00 us 2716124.00 us 38255 LOOKUP 0.47 1024.11 us 54.00 us 6197164.00 us 81455 FXATTROP 1.72 2984.00 us 15.00 us 37098954.00 us 103020 FINODELK 5.92 44315.32 us 51.00 us 24731536.00 us 23957 FSYNC 13.27 2399.78 us 25.00 us 22089540.00 us 991005 READ 37.00 5980.43 us 52.00 us 22099889.00 us 1108976 WRITE 41.04 5452.75 us 13.00 us 22102452.00 us 1349053 INODELK
Duration: 10026 seconds Data Read: 80046027759 bytes Data Written: 44496632320 bytes
Interval 1 Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 983 2696 1059 No. of Writes: 0 838 185
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 852 85856 51575 No. of Writes: 382 705802 57812
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 52673 232093 14984 No. of Writes: 13499 4908 4242
Block Size: 131072b+ No. of Reads: 460040 No. of Writes: 6411 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2093 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1093 RELEASEDIR 0.00 53.38 us 26.00 us 111.00 us 16 FLUSH 0.00 145.14 us 78.00 us 367.00 us 71 REMOVEXATTR 0.00 190.96 us 114.00 us 298.00 us 71 SETATTR 0.00 213.38 us 24.00 us 1145.00 us 90 READDIR 0.00 263.28 us 20.00 us 28385.00 us 228 LK 0.00 101.76 us 2.00 us 880.00 us 1093 OPENDIR 0.01 93.60 us 27.00 us 10371.00 us 3090 STATFS 0.02 537.47 us 17.00 us 193439.00 us 1038 GETXATTR 0.03 297.44 us 35.00 us 9799.00 us 1990 READDIRP 0.03 2357.28 us 110.00 us 382258.00 us 253 XATTROP 0.04 385.93 us 58.00 us 47593.00 us 2091 OPEN 0.04 114.86 us 24.00 us 61061.00 us 7715 STAT 0.06 444.59 us 57.00 us 333240.00 us 3053 DISCARD 0.42 316.24 us 25.00 us 290728.00 us 29823 LOOKUP 0.73 257.92 us 54.00 us 344812.00 us 63296 FXATTROP 1.37 98.30 us 28.00 us 67621.00 us 313172 FSTAT 1.58 2124.69 us 51.00 us 849200.00 us 16717 FSYNC 5.73 162.46 us 52.00 us 748492.00 us 794079 WRITE 7.19 2065.17 us 16.00 us 37098954.00 us 78381 FINODELK 36.44 886.32 us 25.00 us 2216436.00 us 925421 READ 46.30 1178.04 us 13.00 us 1700704.00 us 884635 INODELK
Duration: 7485 seconds Data Read: 71250527215 bytes Data Written: 5119903744 bytes
Brick: ovirt3.nwfiber.com:/gluster/brick2/data ---------------------------------------------- Cumulative Stats: Block Size: 1b+ No. of Reads: 0 No. of Writes: 3264419 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 90 FORGET 0.00 0.00 us 0.00 us 0.00 us 9462 RELEASE 0.00 0.00 us 0.00 us 0.00 us 4254 RELEASEDIR 0.00 50.52 us 13.00 us 190.00 us 71 FLUSH 0.00 186.97 us 87.00 us 713.00 us 86 REMOVEXATTR 0.00 79.32 us 33.00 us 189.00 us 228 LK 0.00 220.98 us 129.00 us 513.00 us 86 SETATTR 0.01 259.30 us 26.00 us 2632.00 us 137 READDIR 0.02 322.76 us 145.00 us 2125.00 us 321 XATTROP 0.03 109.55 us 2.00 us 1258.00 us 1193 OPENDIR 0.05 70.21 us 21.00 us 431.00 us 3196 DISCARD 0.05 169.26 us 21.00 us 2315.00 us 1545 GETXATTR 0.12 176.85 us 63.00 us 2844.00 us 3206 OPEN 0.61 303.49 us 90.00 us 3085.00 us 9633 FSTAT 2.44 305.66 us 28.00 us 3716.00 us 38230 LOOKUP 4.52 266.22 us 55.00 us 53424.00 us 81455 FXATTROP 6.96 1397.99 us 51.00 us 64822.00 us 23889 FSYNC 16.48 84.74 us 25.00 us 6917.00 us 932592 WRITE 30.16 106.90 us 13.00 us 3920189.00 us 1353046 INODELK 38.55 1794.52 us 14.00 us 16210553.00 us 103039 FINODELK
Duration: 66562 seconds Data Read: 0 bytes Data Written: 3264419 bytes
Interval 1 Stats: Block Size: 1b+ No. of Reads: 0 No. of Writes: 794080 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2093 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1093 RELEASEDIR 0.00 70.31 us 26.00 us 125.00 us 16 FLUSH 0.00 193.10 us 103.00 us 713.00 us 71 REMOVEXATTR 0.01 227.32 us 133.00 us 513.00 us 71 SETATTR 0.01 79.32 us 33.00 us 189.00 us 228 LK 0.01 259.83 us 35.00 us 1138.00 us 89 READDIR 0.03 318.26 us 145.00 us 2047.00 us 253 XATTROP 0.04 112.67 us 3.00 us 1258.00 us 1093 OPENDIR 0.06 167.98 us 23.00 us 1951.00 us 1014 GETXATTR 0.08 70.97 us 22.00 us 431.00 us 3053 DISCARD 0.13 183.78 us 66.00 us 2844.00 us 2091 OPEN 1.01 303.82 us 90.00 us 3085.00 us 9610 FSTAT 3.27 316.59 us 30.00 us 3716.00 us 29820 LOOKUP 5.83 265.79 us 59.00 us 53424.00 us 63296 FXATTROP 7.95 1373.89 us 51.00 us 64822.00 us 16717 FSYNC 23.17 851.99 us 14.00 us 16210553.00 us 78555 FINODELK 24.04 87.44 us 27.00 us 6917.00 us 794081 WRITE 34.36 111.91 us 14.00 us 984871.00 us 886790 INODELK
Duration: 7485 seconds Data Read: 0 bytes Data Written: 794080 bytes
-----------------------
Here is the data from the volume that is backed by the SSHDs and has one failed disk:
[root@ovirt2 yum.repos.d]# gluster volume profile data-hdd info
Brick: 172.172.1.12:/gluster/brick3/data-hdd
--------------------------------------------
Cumulative Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 1702 86 16 No. of Writes: 0 767 71
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 19 51841 2049 No. of Writes: 76 60668 35727
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 1744 639 1088 No. of Writes: 8524 2410 1285
Block Size: 131072b+ No. of Reads: 771999 No. of Writes: 29584 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2902 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1517 RELEASEDIR 0.00 197.00 us 197.00 us 197.00 us 1 FTRUNCATE 0.00 70.24 us 16.00 us 758.00 us 51 FLUSH 0.00 143.93 us 82.00 us 305.00 us 57 REMOVEXATTR 0.00 178.63 us 105.00 us 712.00 us 60 SETATTR 0.00 67.30 us 19.00 us 572.00 us 555 LK 0.00 322.80 us 23.00 us 4673.00 us 138 READDIR 0.00 336.56 us 106.00 us 11994.00 us 237 XATTROP 0.00 84.70 us 28.00 us 1071.00 us 3469 STATFS 0.01 387.75 us 2.00 us 146017.00 us 1467 OPENDIR 0.01 148.59 us 21.00 us 64374.00 us 4454 STAT 0.02 783.02 us 16.00 us 93502.00 us 1902 GETXATTR 0.03 1516.10 us 17.00 us 210690.00 us 1364 ENTRYLK 0.03 2555.47 us 300.00 us 674454.00 us 1064 READDIRP 0.07 85.74 us 19.00 us 68340.00 us 62849 FSTAT 0.07 1978.12 us 59.00 us 202596.00 us 2729 OPEN 0.22 708.57 us 15.00 us 394799.00 us 25447 LOOKUP 5.94 2331.74 us 15.00 us 1099530.00 us 207534 FINODELK 7.31 8311.75 us 58.00 us 1800216.00 us 71668 FXATTROP 12.49 7735.19 us 51.00 us 3595513.00 us 131642 WRITE 17.70 957.08 us 16.00 us 13700466.00 us 1508160 INODELK 24.55 2546.43 us 26.00 us 5077347.00 us 786060 READ 31.56 49699.15 us 47.00 us 3746331.00 us 51777 FSYNC
Duration: 10101 seconds Data Read: 101562897361 bytes Data Written: 4834450432 bytes
Interval 0 Stats:
   Block Size:                256b+                 512b+                1024b+
 No. of Reads:                 1702                    86                    16
No. of Writes:                    0                   767                    71

   Block Size:               2048b+                4096b+                8192b+
 No. of Reads:                   19                 51841                  2049
No. of Writes:                   76                 60668                 35727

   Block Size:              16384b+               32768b+               65536b+
 No. of Reads:                 1744                   639                  1088
No. of Writes:                 8524                  2410                  1285

   Block Size:             131072b+
 No. of Reads:               771999
No. of Writes:                29584
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us           2902     RELEASE
      0.00       0.00 us       0.00 us       0.00 us           1517  RELEASEDIR
      0.00     197.00 us     197.00 us     197.00 us              1   FTRUNCATE
      0.00      70.24 us      16.00 us     758.00 us             51       FLUSH
      0.00     143.93 us      82.00 us     305.00 us             57 REMOVEXATTR
      0.00     178.63 us     105.00 us     712.00 us             60     SETATTR
      0.00      67.30 us      19.00 us     572.00 us            555          LK
      0.00     322.80 us      23.00 us    4673.00 us            138     READDIR
      0.00     336.56 us     106.00 us   11994.00 us            237     XATTROP
      0.00      84.70 us      28.00 us    1071.00 us           3469      STATFS
      0.01     387.75 us       2.00 us  146017.00 us           1467     OPENDIR
      0.01     148.59 us      21.00 us   64374.00 us           4454        STAT
      0.02     783.02 us      16.00 us   93502.00 us           1902    GETXATTR
      0.03    1516.10 us      17.00 us  210690.00 us           1364     ENTRYLK
      0.03    2555.47 us     300.00 us  674454.00 us           1064    READDIRP
      0.07      85.73 us      19.00 us   68340.00 us          62849       FSTAT
      0.07    1978.12 us      59.00 us  202596.00 us           2729        OPEN
      0.22     708.57 us      15.00 us  394799.00 us          25447      LOOKUP
      5.94    2334.57 us      15.00 us  1099530.00 us        207534    FINODELK
      7.31    8311.49 us      58.00 us  1800216.00 us         71668    FXATTROP
     12.49    7735.32 us      51.00 us  3595513.00 us        131642       WRITE
     17.71     957.08 us      16.00 us 13700466.00 us       1508160     INODELK
     24.56    2546.42 us      26.00 us  5077347.00 us        786060        READ
     31.54   49651.63 us      47.00 us  3746331.00 us         51777       FSYNC

    Duration: 10101 seconds
   Data Read: 101562897361 bytes
Data Written: 4834450432 bytes
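For anyone wondering how dumps like the above are produced (the "How do I do volume profiling?" question downthread): they come from gluster's built-in io-stats profiler. A minimal sketch of the commands, using the data-hdd volume name from this thread; run on any node in the trusted pool:

```shell
# Enable profiling on the volume (adds a small amount of overhead)
gluster volume profile data-hdd start

# Let the workload run for a while, then dump cumulative and interval stats
gluster volume profile data-hdd info

# Disable profiling when done collecting
gluster volume profile data-hdd stop
```

In the output, high %-latency on FSYNC, INODELK and FINODELK (as seen above) generally points at slow or failing bricks rather than network problems.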
On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir <jim@palousetech.com> wrote:
> Thank you for your response. > > I have 4 gluster volumes. 3 are replica 2 + arbitrator. replica > bricks are on ovirt1 and ovirt2, arbitrator on ovirt3. The 4th volume is > replica 3, with a brick on all three ovirt machines. > > The first 3 volumes are on an SSD disk; the 4th is on a Seagate SSHD > (same in all three machines). On ovirt3, the SSHD has reported hard IO > failures, and that brick is offline. However, the other two replicas are > fully operational (although they still show contents in the heal info > command that won't go away, but that may be the case until I replace the > failed disk). > > What is bothering me is that ALL 4 gluster volumes are showing > horrible performance issues. At this point, as the bad disk has been > completely offlined, I would expect gluster to perform at normal speed, but > that is definitely not the case. > > I've also noticed that the performance hits seem to come in waves: > things seem to work acceptably (but slow) for a while, then suddenly, its > as if all disk IO on all volumes (including non-gluster local OS disk > volumes for the hosts) pause for about 30 seconds, then IO resumes again. > During those times, I start getting VM not responding and host not > responding notices as well as the applications having major issues. > > I've shut down most of my VMs and am down to just my essential core > VMs (shedded about 75% of my VMs). I still am experiencing the same issues. > > Am I correct in believing that once the failed disk was brought > offline that performance should return to normal? > > On Tue, May 29, 2018 at 1:27 PM, Alex K <rightkicktech@gmail.com> > wrote: > >> I would check disks status and accessibility of mount points where >> your gluster volumes reside. 
>> >> On Tue, May 29, 2018, 22:28 Jim Kusznir <jim@palousetech.com> >> wrote: >> >>> On one ovirt server, I'm now seeing these messages: >>> [56474.239725] blk_update_request: 63 callbacks suppressed >>> [56474.239732] blk_update_request: I/O error, dev dm-2, sector 0 >>> [56474.240602] blk_update_request: I/O error, dev dm-2, sector >>> 3905945472 >>> [56474.241346] blk_update_request: I/O error, dev dm-2, sector >>> 3905945584 >>> [56474.242236] blk_update_request: I/O error, dev dm-2, sector 2048 >>> [56474.243072] blk_update_request: I/O error, dev dm-2, sector >>> 3905943424 >>> [56474.243997] blk_update_request: I/O error, dev dm-2, sector >>> 3905943536 >>> [56474.247347] blk_update_request: I/O error, dev dm-2, sector 0 >>> [56474.248315] blk_update_request: I/O error, dev dm-2, sector >>> 3905945472 >>> [56474.249231] blk_update_request: I/O error, dev dm-2, sector >>> 3905945584 >>> [56474.250221] blk_update_request: I/O error, dev dm-2, sector 2048 >>> >>> >>> >>> >>> On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir <jim@palousetech.com >>> > wrote: >>> >>>> I see in messages on ovirt3 (my 3rd machine, the one upgraded to >>>> 4.2): >>>> >>>> May 29 11:54:41 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>> database connection failed (No such file or directory) >>>> May 29 11:54:51 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>> database connection failed (No such file or directory) >>>> May 29 11:55:01 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>> database connection failed (No such file or directory) >>>> (appears a lot). >>>> >>>> I also found on the ssh session of that, some sysv warnings about >>>> the backing disk for one of the gluster volumes (straight replica 3). The >>>> glusterfs process for that disk on that machine went offline. 
Its my >>>> understanding that it should continue to work with the other two machines >>>> while I attempt to replace that disk, right? Attempted writes (touching an >>>> empty file) can take 15 seconds, repeating it later will be much faster. >>>> >>>> Gluster generates a bunch of different log files, I don't know >>>> what ones you want, or from which machine(s). >>>> >>>> How do I do "volume profiling"? >>>> >>>> Thanks! >>>> >>>> On Tue, May 29, 2018 at 11:53 AM, Sahina Bose <sabose@redhat.com> >>>> wrote: >>>> >>>>> Do you see errors reported in the mount logs for the volume? If >>>>> so, could you attach the logs? >>>>> Any issues with your underlying disks. Can you also attach >>>>> output of volume profiling? >>>>> >>>>> On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir < >>>>> jim@palousetech.com> wrote: >>>>> >>>>>> Ok, things have gotten MUCH worse this morning. I'm getting >>>>>> random errors from VMs, right now, about a third of my VMs have been paused >>>>>> due to storage issues, and most of the remaining VMs are not performing >>>>>> well. >>>>>> >>>>>> At this point, I am in full EMERGENCY mode, as my production >>>>>> services are now impacted, and I'm getting calls coming in with problems... >>>>>> >>>>>> I'd greatly appreciate help...VMs are running VERY slowly (when >>>>>> they run), and they are steadily getting worse. I don't know why. I was >>>>>> seeing CPU peaks (to 100%) on several VMs, in perfect sync, for a few >>>>>> minutes at a time (while the VM became unresponsive and any VMs I was >>>>>> logged into that were linux were giving me the CPU stuck messages in my >>>>>> origional post). Is all this storage related? >>>>>> >>>>>> I also have two different gluster volumes for VM storage, and >>>>>> only one had the issues, but now VMs in both are being affected at the same >>>>>> time and same way. 
>>>>>> >>>>>> --Jim >>>>>> >>>>>> On Mon, May 28, 2018 at 10:50 PM, Sahina Bose < >>>>>> sabose@redhat.com> wrote: >>>>>> >>>>>>> [Adding gluster-users to look at the heal issue] >>>>>>> >>>>>>> On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir < >>>>>>> jim@palousetech.com> wrote: >>>>>>> >>>>>>>> Hello: >>>>>>>> >>>>>>>> I've been having some cluster and gluster performance issues >>>>>>>> lately. I also found that my cluster was out of date, and was trying to >>>>>>>> apply updates (hoping to fix some of these), and discovered the ovirt 4.1 >>>>>>>> repos were taken completely offline. So, I was forced to begin an upgrade >>>>>>>> to 4.2. According to docs I found/read, I needed only add the new repo, do >>>>>>>> a yum update, reboot, and be good on my hosts (did the yum update, the >>>>>>>> engine-setup on my hosted engine). Things seemed to work relatively well, >>>>>>>> except for a gluster sync issue that showed up. >>>>>>>> >>>>>>>> My cluster is a 3 node hyperconverged cluster. I upgraded >>>>>>>> the hosted engine first, then engine 3. When engine 3 came back up, for >>>>>>>> some reason one of my gluster volumes would not sync. 
Here's sample output: >>>>>>>> >>>>>>>> [root@ovirt3 ~]# gluster volume heal data-hdd info >>>>>>>> Brick 172.172.1.11:/gluster/brick3/data-hdd >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>>>>>>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>>>>>>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>>>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>>>>>>> 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>>>>>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>>>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>>>>>>> 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>>>>>>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>>>>>> Status: Connected >>>>>>>> Number of entries: 8 >>>>>>>> >>>>>>>> Brick 172.172.1.12:/gluster/brick3/data-hdd >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>>>>>>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>>>>>>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>>>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>>>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>>>>>>> 
4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>>>>>>> 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>>>>>>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>>>>>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>>>>>> Status: Connected >>>>>>>> Number of entries: 8 >>>>>>>> >>>>>>>> Brick 172.172.1.13:/gluster/brick3/data-hdd >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>>>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>>>>>>> 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>>>>>>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>>>>>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>>>>>>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>>>>>>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>>>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>>>>>>> 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>>>>>> Status: Connected >>>>>>>> Number of entries: 8 >>>>>>>> >>>>>>>> --------- >>>>>>>> Its been in this state for a couple days now, and bandwidth >>>>>>>> monitoring shows no appreciable data moving. I've tried repeatedly >>>>>>>> commanding a full heal from all three clusters in the node. 
Its always the >>>>>>>> same files that need healing. >>>>>>>> >>>>>>>> When running gluster volume heal data-hdd statistics, I see >>>>>>>> sometimes different information, but always some number of "heal failed" >>>>>>>> entries. It shows 0 for split brain. >>>>>>>> >>>>>>>> I'm not quite sure what to do. I suspect it may be due to >>>>>>>> nodes 1 and 2 still being on the older ovirt/gluster release, but I'm >>>>>>>> afraid to upgrade and reboot them until I have a good gluster sync (don't >>>>>>>> need to create a split brain issue). How do I proceed with this? >>>>>>>> >>>>>>>> Second issue: I've been experiencing VERY POOR performance on >>>>>>>> most of my VMs. To the tune that logging into a windows 10 vm via remote >>>>>>>> desktop can take 5 minutes, launching quickbooks inside said vm can easily >>>>>>>> take 10 minutes. On some linux VMs, I get random messages like this: >>>>>>>> Message from syslogd@unifi at May 28 20:39:23 ... >>>>>>>> kernel:[6171996.308904] NMI watchdog: BUG: soft lockup - >>>>>>>> CPU#0 stuck for 22s! [mongod:14766] >>>>>>>> >>>>>>>> (the process and PID are often different) >>>>>>>> >>>>>>>> I'm not quite sure what to do about this either. My initial >>>>>>>> thought was upgrad everything to current and see if its still there, but I >>>>>>>> cannot move forward with that until my gluster is healed... >>>>>>>> >>>>>>>> Thanks! 
>>>>>>>> --Jim
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/JKBSLSZFRMFGILXGZB4YAMJIJKEKNGMA/
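For reference, the heal state discussed in this thread can be inspected and re-triggered from the CLI; a minimal sketch, assuming the data-hdd volume name from the output above:

```shell
# List entries still pending heal on each brick
gluster volume heal data-hdd info

# Show only entries that are actually in split-brain (the thread reports 0)
gluster volume heal data-hdd info split-brain

# Kick off a full self-heal crawl across all bricks
gluster volume heal data-hdd full

# Per-crawl counters; watch the "heal failed" numbers between crawls
gluster volume heal data-hdd statistics
```

If the same eight entries keep reappearing after repeated full heals, the self-heal daemon logs (glustershd.log on each node) usually say why the heal is failing.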

So I appear to be at the point where gluster is functional, but the engine still won't start. Here's from agent.log on one machine after giving the hosted-engine --vm-start command:

MainThread::INFO::2018-05-29 23:52:09,953::hosted_engine::948::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) Engine VM started on localhost
MainThread::INFO::2018-05-29 23:52:10,230::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStart-EngineStarting) sent? sent
MainThread::INFO::2018-05-29 23:56:16,061::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400)
MainThread::INFO::2018-05-29 23:56:16,075::states::779::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) VM is unexpectedly down.
MainThread::INFO::2018-05-29 23:56:16,076::state_decorators::92::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Timeout cleared while transitioning <class 'ovirt_hosted_engine_ha.agent.states.EngineStarting'> -> <class 'ovirt_hosted_engine_ha.agent.states.EngineMaybeAway'>
MainThread::INFO::2018-05-29 23:56:16,359::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStarting-EngineMaybeAway) sent?
sent MainThread::INFO::2018-05-29 23:56:16,554::ovf_store::151::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8-49e73e6ba929 --------------- here's an excerpt from vdsm.log: 2018-05-30 00:00:23,114-0700 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call VM.create succeeded in 0.02 seconds (__init__:573) 2018-05-30 00:00:23,115-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') VM wrapper has started (vm:2764) 2018-05-30 00:00:23,125-0700 INFO (vm/50392390) [vdsm.api] START getVolumeSize(sdUUID='c0acdefb-7d16-48ec-9d76-659b8fe33e2a', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='3de00d8c-d8b0-4ae0-9363-38a504f5d2b2', volUUID='d0fcd8de-6105-4f33-a674-727e3a11e89f', options=None) from=internal, task_id=c6d74ce9-1944-457a-9deb-d0a6929f39ce (api:46) 2018-05-30 00:00:23,130-0700 INFO (vm/50392390) [vdsm.api] FINISH getVolumeSize return={'truesize': '11298571776', 'apparentsize': '10737418240'} from=internal, task_id=c6d74ce9-1944-457a-9deb-d0a6929f39ce (api:52) 2018-05-30 00:00:23,131-0700 INFO (vm/50392390) [vds] prepared volume path: (clientIF:497) 2018-05-30 00:00:23,131-0700 INFO (vm/50392390) [vdsm.api] START prepareImage(sdUUID='c0acdefb-7d16-48ec-9d76-659b8fe33e2a', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='3de00d8c-d8b0-4ae0-9363-38a504f5d2b2', leafUUID='d0fcd8de-6105-4f33-a674-727e3a11e89f', allowIllegal=False) from=internal, task_id=d2797364-270f-4800-92b0-d114441194a3 (api:46) 2018-05-30 00:00:23,155-0700 INFO (vm/50392390) [storage.StorageDomain] Fixing permissions on /rhev/data-center/mnt/glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f (fileSD:622) 2018-05-30 00:00:23,156-0700 INFO (vm/50392390) [storage.StorageDomain] Creating domain run directory 
u'/var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a' (fileSD:576) 2018-05-30 00:00:23,156-0700 INFO (vm/50392390) [storage.fileUtils] Creating directory: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a mode: None (fileUtils:197) 2018-05-30 00:00:23,157-0700 INFO (vm/50392390) [storage.StorageDomain] Creating symlink from /rhev/data-center/mnt/glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2 to /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2 (fileSD:579) 2018-05-30 00:00:23,228-0700 INFO (jsonrpc/0) [root] /usr/libexec/vdsm/hooks/after_get_caps/openstacknet_utils.py: rc=0 err= (hooks:110) 2018-05-30 00:00:23,418-0700 INFO (vm/50392390) [vdsm.api] FINISH prepareImage return={'info': {'path': u'engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f', 'type': 'network', 'hosts': [{'port': '0', 'transport': 'tcp', 'name': '192.168.8.11'}, {'port': '0', 'transport': 'tcp', 'name': ' ovirt1.nwfiber.com'}, {'port': '0', 'transport': 'tcp', 'name': ' ovirt2.nwfiber.com'}, {'port': '0', 'transport': 'tcp', 'name': ' ovirt3.nwfiber.com'}], 'protocol': 'gluster'}, 'path': u'/var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f', 'imgVolumesInfo': [{'domainID': 'c0acdefb-7d16-48ec-9d76-659b8fe33e2a', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f', 'volumeID': u'd0fcd8de-6105-4f33-a674-727e3a11e89f', 'leasePath': u'/rhev/data-center/mnt/glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f.lease', 'imageID': '3de00d8c-d8b0-4ae0-9363-38a504f5d2b2'}]} 
from=internal, task_id=d2797364-270f-4800-92b0-d114441194a3 (api:52) 2018-05-30 00:00:23,419-0700 INFO (vm/50392390) [vds] prepared volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f (clientIF:497) 2018-05-30 00:00:23,420-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') Enabling drive monitoring (drivemonitor:54) 2018-05-30 00:00:23,434-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') drive 'hdc' path: 'file=' -> '*file=' (storagexml:323) 2018-05-30 00:00:23,435-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') drive 'vda' path: 'file=/rhev/data-center/00000000-0000-0000-0000-000000000000/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f' -> u'*file=/var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f' (storagexml:323) 2018-05-30 00:00:23,435-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') Using drive lease: {'domainID': 'c0acdefb-7d16-48ec-9d76-659b8fe33e2a', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f', 'volumeID': u'd0fcd8de-6105-4f33-a674-727e3a11e89f', 'leasePath': u'/rhev/data-center/mnt/glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f.lease', 'imageID': '3de00d8c-d8b0-4ae0-9363-38a504f5d2b2'} (lease:327) 2018-05-30 00:00:23,436-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') lease c0acdefb-7d16-48ec-9d76-659b8fe33e2a:d0fcd8de-6105-4f33-a674-727e3a11e89f path: 
'LEASE-PATH:d0fcd8de-6105-4f33-a674-727e3a11e89f:c0acdefb-7d16-48ec-9d76-659b8fe33e2a' -> u'/rhev/data-center/mnt/glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f.lease' (lease:316) 2018-05-30 00:00:23,436-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') lease c0acdefb-7d16-48ec-9d76-659b8fe33e2a:d0fcd8de-6105-4f33-a674-727e3a11e89f offset: 'LEASE-OFFSET:d0fcd8de-6105-4f33-a674-727e3a11e89f:c0acdefb-7d16-48ec-9d76-659b8fe33e2a' -> 0 (lease:316) 2018-05-30 00:00:23,580-0700 INFO (jsonrpc/0) [root] /usr/libexec/vdsm/hooks/after_get_caps/ovirt_provider_ovn_hook: rc=0 err=Failed to get Open VSwitch system-id . err = ['ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)'] (hooks:110) 2018-05-30 00:00:23,582-0700 INFO (jsonrpc/0) [api.host] FINISH getCapabilities return={'status': {'message': 'Done', 'code': 0}, 'info': {u'HBAInventory': {u'iSCSI': [{u'InitiatorName': u'iqn.1994-05.com.redhat:1283ce7a3171'}], u'FC': []}, u'packages2': {u'kernel': {u'release': u'862.3.2.el7.x86_64', u'version': u'3.10.0'}, u'glusterfs-rdma': {u'release': u'1.el7', u'version': u'3.12.9'}, u'glusterfs-fuse': {u'release': u'1.el7', u'version': u'3.12.9'}, u'spice-server': {u'release': u'2.el7_5.3', u'version': u'0.14.0'}, u'librbd1': {u'release': u'2.el7', u'version': u'0.94.5'}, u'vdsm': {u'release': u'1.el7.centos', u'version': u'4.20.27.1'}, u'qemu-kvm': {u'release': u'21.el7_5.3.1', u'version': u'2.10.0'}, u'openvswitch': {u'release': u'3.el7', u'version': u'2.9.0'}, u'libvirt': {u'release': u'14.el7_5.5', u'version': u'3.9.0'}, u'ovirt-hosted-engine-ha': {u'release': u'1.el7.centos', u'version': u'2.2.11'}, u'qemu-img': {u'release': u'21.el7_5.3.1', u'version': u'2.10.0'}, u'mom': {u'release': u'1.el7.centos', u'version': u'0.5.12'}, u'glusterfs': {u'release': u'1.el7', u'version': u'3.12.9'}, 
u'glusterfs-cli': {u'release': u'1.el7', u'version': u'3.12.9'}, u'glusterfs-server': {u'release': u'1.el7', u'version': u'3.12.9'}, u'glusterfs-geo-replication': {u'release': u'1.el7', u'version': u'3.12.9'}}, u'numaNodeDistance': {u'1': [20, 10], u'0': [10, 20]}, u'cpuModel': u'Intel(R) Xeon(R) CPU E5540 @ 2.53GHz', u'nestedVirtualization': False, u'liveMerge': u'true', u'hooks': {u'before_vm_start': {u'50_hostedengine': {u'md5': u'95c810cdcfe4195302a59574a5148289'}, u'50_vhostmd': {u'md5': u'9206bc390bcbf208b06a8e899581be2d'}, u'50_vfio_mdev': {u'md5': u'b77eaad7b8f5ca3825dca1b38b0dd446'}}, u'after_network_setup': {u'30_ethtool_options': {u'md5': u'f04c2ca5dce40663e2ed69806eea917c'}}, u'after_vm_destroy': {u'50_vhostmd': {u'md5': u'd70f7ee0453f632e87c09a157ff8ff66'}, u'50_vfio_mdev': {u'md5': u'35364c6e08ecc58410e31af6c73b17f6'}}, u'after_vm_start': {u'openstacknet_utils.py': {u'md5': u'640ef8b473cbc16b102c5ebf57ea533f'}, u'50_openstacknet': {u'md5': u'ea0a5a715da8c1badbcda28e8b8fa00e'}}, u'after_device_migrate_destination': {u'openstacknet_utils.py': {u'md5': u'640ef8b473cbc16b102c5ebf57ea533f'}, u'50_openstacknet': {u'md5': u'f93c12af7454bbba4c1ef445c2bc9860'}}, u'before_network_setup': {u'50_fcoe': {u'md5': u'28c352339c8beef1e1b05c67d106d062'}}, u'after_get_caps': {u'openstacknet_utils.py': {u'md5': u'640ef8b473cbc16b102c5ebf57ea533f'}, u'50_openstacknet': {u'md5': u'0e00c63ab44a952e722209ea31fd7a71'}, u'ovirt_provider_ovn_hook': {u'md5': u'257990644ee6c7824b522e2ab3077ae7'}}, u'after_nic_hotplug': {u'openstacknet_utils.py': {u'md5': u'640ef8b473cbc16b102c5ebf57ea533f'}, u'50_openstacknet': {u'md5': u'f93c12af7454bbba4c1ef445c2bc9860'}}, u'before_vm_migrate_destination': {u'50_vhostmd': {u'md5': u'9206bc390bcbf208b06a8e899581be2d'}}, u'after_device_create': {u'openstacknet_utils.py': {u'md5': u'640ef8b473cbc16b102c5ebf57ea533f'}, u'50_openstacknet': {u'md5': u'f93c12af7454bbba4c1ef445c2bc9860'}}, u'before_vm_dehibernate': {u'50_vhostmd': {u'md5': 
u'9206bc390bcbf208b06a8e899581be2d'}}, u'before_nic_hotplug': {u'50_vmfex': {u'md5': u'49caba1a5faadd8efacef966f79bc30a'}, u'openstacknet_utils.py': {u'md5': u'640ef8b473cbc16b102c5ebf57ea533f'}, u'50_openstacknet': {u'md5': u'0438de5ff9b6bf8d3160804ff71bf827'}, u'ovirt_provider_ovn_hook': {u'md5': u'ada7e4e757fd0241b682b5cc6a545fc8'}}, u'before_device_migrate_destination': {u'50_vmfex': {u'md5': u'49caba1a5faadd8efacef966f79bc30a'}}, u'before_device_create': {u'50_vmfex': {u'md5': u'49caba1a5faadd8efacef966f79bc30a'}, u'openstacknet_utils.py': {u'md5': u'640ef8b473cbc16b102c5ebf57ea533f'}, u'50_openstacknet': {u'md5': u'0438de5ff9b6bf8d3160804ff71bf827'}, u'ovirt_provider_ovn_hook': {u'md5': u'ada7e4e757fd0241b682b5cc6a545fc8'}}}, u'supportsIPv6': True, u'realtimeKernel': False, u'vmTypes': [u'kvm'], u'liveSnapshot': u'true', u'cpuThreads': u'16', u'kdumpStatus': 0, u'networks': {u'ovirtmgmt': {u'iface': u'ovirtmgmt', u'ipv6autoconf': False, u'addr': u'192.168.8.12', u'dhcpv6': False, u'ipv6addrs': [], u'switch': u'legacy', u'bridged': True, u'mtu': u'1500', u'dhcpv4': True, u'netmask': u'255.255.255.0', u'ipv4defaultroute': True, u'stp': u'off', u'ipv4addrs': [u'192.168.8.12/24'], u'ipv6gateway': u'::', u'gateway': u'192.168.8.1', u'ports': [u'em1']}, u'Public_Fiber': {u'iface': u'Public_Fiber', u'ipv6autoconf': False, u'addr': u'', u'dhcpv6': False, u'ipv6addrs': [], u'switch': u'legacy', u'bridged': True, u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'', u'ipv4defaultroute': False, u'stp': u'off', u'ipv4addrs': [], u'ipv6gateway': u'::', u'gateway': u'', u'ports': [u'em1.3']}, u'Gluster': {u'iface': u'em4', u'ipv6autoconf': False, u'addr': u'172.172.1.12', u'dhcpv6': False, u'ipv6addrs': [], u'switch': u'legacy', u'bridged': False, u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'255.255.255.0', u'ipv4defaultroute': False, u'ipv4addrs': [u'172.172.1.12/24'], u'interface': u'em4', u'ipv6gateway': u'::', u'gateway': u''}}, u'kernelArgs': 
u'BOOT_IMAGE=/vmlinuz-3.10.0-862.3.2.el7.x86_64 root=/dev/mapper/centos_ovirt-root ro crashkernel=auto rd.lvm.lv=centos_ovirt/root rd.lvm.lv=centos_ovirt/swap rhgb quiet LANG=en_US.UTF-8', u'bridges': {u'Public_Fiber': {u'ipv6autoconf': False, u'addr': u'', u'dhcpv6': False, u'ipv6addrs': [], u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'', u'ipv4defaultroute': False, u'stp': u'off', u'ipv4addrs': [], u'ipv6gateway': u'::', u'gateway': u'', u'opts': {u'multicast_last_member_count': u'2', u'vlan_protocol': u'0x8100', u'hash_elasticity': u'4', u'multicast_query_response_interval': u'1000', u'group_fwd_mask': u'0x0', u'multicast_snooping': u'1', u'multicast_startup_query_interval': u'3125', u'hello_timer': u'0', u'multicast_querier_interval': u'25500', u'max_age': u'2000', u'hash_max': u'512', u'stp_state': u'0', u'topology_change_detected': u'0', u'priority': u'32768', u'multicast_igmp_version': u'2', u'multicast_membership_interval': u'26000', u'root_path_cost': u'0', u'root_port': u'0', u'multicast_stats_enabled': u'0', u'multicast_startup_query_count': u'2', u'nf_call_iptables': u'0', u'vlan_stats_enabled': u'0', u'hello_time': u'200', u'topology_change': u'0', u'bridge_id': u'8000.00219b900710', u'topology_change_timer': u'0', u'ageing_time': u'30000', u'nf_call_ip6tables': u'0', u'multicast_mld_version': u'1', u'gc_timer': u'25509', u'root_id': u'8000.00219b900710', u'nf_call_arptables': u'0', u'group_addr': u'1:80:c2:0:0:0', u'multicast_last_member_interval': u'100', u'default_pvid': u'1', u'multicast_query_interval': u'12500', u'multicast_query_use_ifaddr': u'0', u'tcn_timer': u'0', u'multicast_router': u'1', u'vlan_filtering': u'0', u'multicast_querier': u'0', u'forward_delay': u'0'}, u'ports': [u'em1.3']}, u'ovirtmgmt': {u'ipv6autoconf': False, u'addr': u'192.168.8.12', u'dhcpv6': False, u'ipv6addrs': [], u'mtu': u'1500', u'dhcpv4': True, u'netmask': u'255.255.255.0', u'ipv4defaultroute': True, u'stp': u'off', u'ipv4addrs': [u'192.168.8.12/24'], 
u'ipv6gateway': u'::', u'gateway': u'192.168.8.1', u'opts': {u'multicast_last_member_count': u'2', u'vlan_protocol': u'0x8100', u'hash_elasticity': u'4', u'multicast_query_response_interval': u'1000', u'group_fwd_mask': u'0x0', u'multicast_snooping': u'1', u'multicast_startup_query_interval': u'3125', u'hello_timer': u'0', u'multicast_querier_interval': u'25500', u'max_age': u'2000', u'hash_max': u'512', u'stp_state': u'0', u'topology_change_detected': u'0', u'priority': u'32768', u'multicast_igmp_version': u'2', u'multicast_membership_interval': u'26000', u'root_path_cost': u'0', u'root_port': u'0', u'multicast_stats_enabled': u'0', u'multicast_startup_query_count': u'2', u'nf_call_iptables': u'0', u'vlan_stats_enabled': u'0', u'hello_time': u'200', u'topology_change': u'0', u'bridge_id': u'8000.00219b900710', u'topology_change_timer': u'0', u'ageing_time': u'30000', u'nf_call_ip6tables': u'0', u'multicast_mld_version': u'1', u'gc_timer': u'13221', u'root_id': u'8000.00219b900710', u'nf_call_arptables': u'0', u'group_addr': u'1:80:c2:0:0:0', u'multicast_last_member_interval': u'100', u'default_pvid': u'1', u'multicast_query_interval': u'12500', u'multicast_query_use_ifaddr': u'0', u'tcn_timer': u'0', u'multicast_router': u'1', u'vlan_filtering': u'0', u'multicast_querier': u'0', u'forward_delay': u'0'}, u'ports': [u'em1']}}, u'uuid': u'4C4C4544-005A-4710-8034-B2C04F4C4B31', u'onlineCpus': u'0,2,4,6,8,10,12,14,1,3,5,7,9,11,13,15', u'nameservers': [u'192.168.8.1', u'8.8.8.8'], u'nics': {u'em4': {u'ipv6autoconf': False, u'addr': u'172.172.1.12', u'speed': 1000, u'dhcpv6': False, u'ipv6addrs': [], u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'255.255.255.0', u'ipv4defaultroute': False, u'ipv4addrs': [u'172.172.1.12/24'], u'hwaddr': u'00:21:9b:90:07:16', u'ipv6gateway': u'::', u'gateway': u''}, u'em1': {u'ipv6autoconf': False, u'addr': u'', u'speed': 1000, u'dhcpv6': False, u'ipv6addrs': [], u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'', u'ipv4defaultroute': 
False, u'ipv4addrs': [], u'hwaddr': u'00:21:9b:90:07:10', u'ipv6gateway': u'::', u'gateway': u''}, u'em3': {u'ipv6autoconf': True, u'addr': u'', u'speed': 0, u'dhcpv6': False, u'ipv6addrs': [], u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'', u'ipv4defaultroute': False, u'ipv4addrs': [], u'hwaddr': u'00:21:9b:90:07:14', u'ipv6gateway': u'::', u'gateway': u''}, u'em2': {u'ipv6autoconf': True, u'addr': u'', u'speed': 0, u'dhcpv6': False, u'ipv6addrs': [], u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'', u'ipv4defaultroute': False, u'ipv4addrs': [], u'hwaddr': u'00:21:9b:90:07:12', u'ipv6gateway': u'::', u'gateway': u''}}, u'software_revision': u'1', u'hostdevPassthrough': u'false', u'clusterLevels': [u'3.6', u'4.0', u'4.1', u'4.2'], u'cpuFlags': u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,xtopology,nonstop_tsc,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,dca,sse4_1,sse4_2,popcnt,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,dtherm,ida,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_n270', u'kernelFeatures': {u'RETP': 1, u'IBRS': 0, u'PTI': 1}, u'ISCSIInitiatorName': u'iqn.1994-05.com.redhat:1283ce7a3171', u'netConfigDirty': u'False', u'selinux': {u'mode': u'1'}, u'autoNumaBalancing': 1, u'reservedMem': u'321', u'containers': False, u'bondings': {}, u'software_version': u'4.20', u'supportedENGINEs': [u'3.6', u'4.0', u'4.1', u'4.2'], u'cpuSpeed': u'2527.017', u'numaNodes': {u'1': {u'totalMemory': u'16371', u'cpus': [1, 3, 5, 7, 9, 11, 13, 15]}, u'0': {u'totalMemory': u'16384', u'cpus': [0, 2, 4, 6, 8, 10, 12, 14]}}, u'cpuSockets': u'2', u'vlans': {u'em1.3': {u'iface': u'em1', u'ipv6autoconf': False, u'addr': u'', u'dhcpv6': False, u'ipv6addrs': [], u'vlanid': 3, u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'', u'ipv4defaultroute': False, u'ipv4addrs': [], 
u'ipv6gateway': u'::', u'gateway': u''}}, u'version_name': u'Snow Man', 'lastClientIface': 'lo', u'cpuCores': u'8', u'hostedEngineDeployed': True, u'hugepages': [2048], u'guestOverhead': u'65', u'additionalFeatures': [u'libgfapi_supported', u'GLUSTER_SNAPSHOT', u'GLUSTER_GEO_REPLICATION', u'GLUSTER_BRICK_MANAGEMENT'], u'kvmEnabled': u'true', u'memSize': u'31996', u'emulatedMachines': [u'pc-i440fx-rhel7.1.0', u'pc-q35-rhel7.3.0', u'rhel6.3.0', u'pc-i440fx-rhel7.5.0', u'pc-i440fx-rhel7.0.0', u'rhel6.1.0', u'pc-i440fx-rhel7.4.0', u'rhel6.6.0', u'pc-q35-rhel7.5.0', u'rhel6.2.0', u'pc', u'pc-i440fx-rhel7.3.0', u'q35', u'pc-i440fx-rhel7.2.0', u'rhel6.4.0', u'pc-q35-rhel7.4.0', u'rhel6.0.0', u'rhel6.5.0'], u'rngSources': [u'hwrng', u'random'], u'operatingSystem': {u'release': u'5.1804.el7.centos.2', u'pretty_name': u'CentOS Linux 7 (Core)', u'version': u'7', u'name': u'RHEL'}}} from=::1,35660 (api:52) 2018-05-30 00:00:23,593-0700 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getCapabilities succeeded in 3.26 seconds (__init__:573) 2018-05-30 00:00:23,724-0700 INFO (vm/50392390) [root] /usr/libexec/vdsm/hooks/before_device_create/50_openstacknet: rc=0 err= (hooks:110) 2018-05-30 00:00:23,919-0700 INFO (vm/50392390) [root] /usr/libexec/vdsm/hooks/before_device_create/50_vmfex: rc=0 err= (hooks:110) 2018-05-30 00:00:24,213-0700 INFO (vm/50392390) [root] /usr/libexec/vdsm/hooks/before_device_create/openstacknet_utils.py: rc=0 err= (hooks:110) 2018-05-30 00:00:24,392-0700 INFO (vm/50392390) [root] /usr/libexec/vdsm/hooks/before_device_create/ovirt_provider_ovn_hook: rc=0 err= (hooks:110) 2018-05-30 00:00:24,401-0700 INFO (jsonrpc/1) [api.host] START getAllVmStats() from=::1,33766 (api:46) 2018-05-30 00:00:24,403-0700 INFO (jsonrpc/1) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::1,33766 (api:52) 2018-05-30 00:00:24,403-0700 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats 
succeeded in 0.01 seconds (__init__:573) 2018-05-30 00:00:24,623-0700 INFO (vm/50392390) [root] /usr/libexec/vdsm/hooks/before_vm_start/50_hostedengine: rc=0 err= (hooks:110) 2018-05-30 00:00:24,836-0700 INFO (vm/50392390) [root] /usr/libexec/vdsm/hooks/before_vm_start/50_vfio_mdev: rc=0 err= (hooks:110) 2018-05-30 00:00:25,022-0700 INFO (vm/50392390) [root] /usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd: rc=0 err= (hooks:110) 2018-05-30 00:00:25,023-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') <?xml version="1.0" encoding="utf-8"?><domain type="kvm" xmlns:ns0="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0"> <name>HostedEngine</name> <uuid>50392390-4f89-435c-bf3b-8254b58a4ef7</uuid> <memory>8388608</memory> <currentMemory>8388608</currentMemory> <maxMemory slots="16">33554432</maxMemory> <vcpu current="4">16</vcpu> <sysinfo type="smbios"> <system> <entry name="manufacturer">oVirt</entry> <entry name="product">oVirt Node</entry> <entry name="version">7-5.1804.el7.centos.2</entry> <entry name="serial">4C4C4544-005A-4710-8034-B2C04F4C4B31</entry> <entry name="uuid">50392390-4f89-435c-bf3b-8254b58a4ef7</entry> </system> </sysinfo> <clock adjustment="0" offset="variable"> <timer name="rtc" tickpolicy="catchup"/> <timer name="pit" tickpolicy="delay"/> <timer name="hpet" present="no"/> </clock> <features> <acpi/> </features> <cpu match="exact"> <model>Nehalem</model> <topology cores="1" sockets="16" threads="1"/> <numa> <cell cpus="0,1,2,3" id="0" memory="8388608"/> </numa> </cpu> <cputune/> <devices> <input bus="usb" type="tablet"/> <channel type="unix"> <target name="ovirt-guest-agent.0" type="virtio"/> <source mode="bind" path="/var/lib/libvirt/qemu/channels/50392390-4f89-435c-bf3b-8254b58a4ef7.ovirt-guest-agent.0"/> </channel> <channel type="unix"> <target name="org.qemu.guest_agent.0" type="virtio"/> <source mode="bind" 
path="/var/lib/libvirt/qemu/channels/50392390-4f89-435c-bf3b-8254b58a4ef7.org.qemu.guest_agent.0"/> </channel> <controller index="0" ports="16" type="virtio-serial"> <address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci"/> </controller> <rng model="virtio"> <backend model="random">/dev/urandom</backend> </rng> <graphics autoport="yes" keymap="en-us" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" type="vnc"> <listen network="vdsm-ovirtmgmt" type="network"/> </graphics> <controller type="usb"> <address bus="0x00" domain="0x0000" function="0x2" slot="0x01" type="pci"/> </controller> <controller type="ide"> <address bus="0x00" domain="0x0000" function="0x1" slot="0x01" type="pci"/> </controller> <video> <model heads="1" type="cirrus" vram="16384"/> <address bus="0x00" domain="0x0000" function="0x0" slot="0x05" type="pci"/> </video> <memballoon model="none"/> <disk device="cdrom" snapshot="no" type="file"> <driver error_policy="report" name="qemu" type="raw"/> <source file="" startupPolicy="optional"/> <target bus="ide" dev="hdc"/> <readonly/> <address bus="1" controller="0" target="0" type="drive" unit="0"/> </disk> <disk device="disk" snapshot="no" type="file"> <target bus="virtio" dev="vda"/> <source file="/var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f"/> <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/> <alias name="ua-3de00d8c-d8b0-4ae0-9363-38a504f5d2b2"/> <address bus="0x00" domain="0x0000" function="0x0" slot="0x04" type="pci"/> <serial>3de00d8c-d8b0-4ae0-9363-38a504f5d2b2</serial> </disk> <lease> <key>d0fcd8de-6105-4f33-a674-727e3a11e89f</key> <lockspace>c0acdefb-7d16-48ec-9d76-659b8fe33e2a</lockspace> <target offset="0" path="/rhev/data-center/mnt/glusterSD/192.168.8.11: _engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f.lease"/> </lease> 
<interface type="bridge"> <model type="virtio"/> <link state="up"/> <source bridge="ovirtmgmt"/> <alias name="ua-9f93c126-cbb3-4c5b-909e-41a53c2e31fb"/> <address bus="0x00" domain="0x0000" function="0x0" slot="0x02" type="pci"/> <mac address="00:16:3e:2d:bd:b1"/> <filterref filter="vdsm-no-mac-spoofing"/> <bandwidth/> </interface> <channel type="unix"><target name="org.ovirt.hosted-engine-setup.0" type="virtio"/><source mode="bind" path="/var/lib/libvirt/qemu/channels/50392390-4f89-435c-bf3b-8254b58a4ef7.org.ovirt.hosted-engine-setup.0"/></channel></devices> <pm> <suspend-to-disk enabled="no"/> <suspend-to-mem enabled="no"/> </pm> <os> <type arch="x86_64" machine="pc-i440fx-rhel7.3.0">hvm</type> <smbios mode="sysinfo"/> </os> <metadata> <ns0:qos/> <ovirt-vm:vm> <minGuaranteedMemoryMb type="int">8192</minGuaranteedMemoryMb> <clusterVersion>4.2</clusterVersion> <ovirt-vm:custom/> <ovirt-vm:device mac_address="00:16:3e:2d:bd:b1"> <ovirt-vm:custom/> </ovirt-vm:device> <ovirt-vm:device devtype="disk" name="vda"> <ovirt-vm:poolID>00000000-0000-0000-0000-000000000000</ovirt-vm:poolID> <ovirt-vm:volumeID>d0fcd8de-6105-4f33-a674-727e3a11e89f</ovirt-vm:volumeID> <ovirt-vm:shared>exclusive</ovirt-vm:shared> <ovirt-vm:imageID>3de00d8c-d8b0-4ae0-9363-38a504f5d2b2</ovirt-vm:imageID> <ovirt-vm:domainID>c0acdefb-7d16-48ec-9d76-659b8fe33e2a</ovirt-vm:domainID> </ovirt-vm:device> <launchPaused>false</launchPaused> <resumeBehavior>auto_resume</resumeBehavior> </ovirt-vm:vm> </metadata> <on_poweroff>destroy</on_poweroff><on_reboot>destroy</on_reboot><on_crash>destroy</on_crash></domain> (vm:2867) 2018-05-30 00:00:25,252-0700 INFO (monitor/c0acdef) [storage.SANLock] Acquiring host id for domain c0acdefb-7d16-48ec-9d76-659b8fe33e2a (id=2, async=True) (clusterlock:284) 2018-05-30 00:00:26,808-0700 ERROR (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') The vm start process failed (vm:943) Traceback (most recent call last): File 
"/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 872, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2872, in _run
    dom.createWithFlags(flags)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1099, in createWithFlags
    if ret == -1: raise libvirtError('virDomainCreateWithFlags() failed', dom=self)
libvirtError: Failed to acquire lock: No space left on device
2018-05-30 00:00:26,809-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') Changed state to Down: Failed to acquire lock: No space left on device (code=1) (vm:1683)
-------------
The "Failed to acquire lock: No space left on device" error has me concerned... yet no device is full on any of the servers (especially the one I'm on):

Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/centos_ovirt-root  8.0G  6.6G  1.5G  82% /
devtmpfs                        16G     0   16G   0% /dev
tmpfs                           16G  4.0K   16G   1% /dev/shm
tmpfs                           16G   18M   16G   1% /run
tmpfs                           16G     0   16G   0% /sys/fs/cgroup
/dev/mapper/gluster-data       177G   59G  119G  33% /gluster/brick2
/dev/mapper/gluster-engine      25G   18G  7.9G  69% /gluster/brick1
/dev/mapper/vg_thin-iso         25G  7.5G   18G  30% /gluster/brick4
/dev/mapper/vg_thin-data--hdd  1.2T  308G  892G  26% /gluster/brick3
/dev/sda1                      497M  313M  185M  63% /boot
192.168.8.11:/engine            25G   20G  5.5G  79% /rhev/data-center/mnt/glusterSD/192.168.8.11:_engine
tmpfs                          3.2G     0  3.2G   0% /run/user/0
192.168.8.11:/data             136G   60G   77G  44% /rhev/data-center/mnt/glusterSD/192.168.8.11:_data
192.168.8.11:/iso               25G  7.5G   18G  30% /rhev/data-center/mnt/glusterSD/192.168.8.11:_iso
172.172.1.11:/data-hdd         1.2T  308G  892G  26% /rhev/data-center/mnt/glusterSD/172.172.1.11:_data-hdd
--------
--Jim

On Tue, May 29, 2018 at 11:53 PM, Jim Kusznir <jim@palousetech.com>
wrote:
I've just gotten back to where hosted-engine --vm-status does (eventually) return output other than the "ha-agent or storage is down" error. It currently takes up to two minutes to complete, and the last run returned stale data.
Gluster-wise, gluster volume heal <volume> info shows 0 entries (and returns quickly) for the three volumes that have all their bricks up. The 4th volume, which is missing one brick (but still has two replicas), does return entries, and currently takes a very long time to come back with them.
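To keep an eye on that 4th volume while it heals, the pending-entry count can be totalled per run. A minimal sketch, fed here from a canned sample of the heal-info output earlier in this thread (in real use you'd pipe in the live `gluster volume heal data-hdd info` instead, since the command needs the cluster):

```shell
# Sum "Number of entries:" across all bricks of a heal-info report.
# The heredoc stands in for: gluster volume heal data-hdd info
heal_info() {
cat <<'EOF'
Brick 172.172.1.11:/gluster/brick3/data-hdd
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
Status: Connected
Number of entries: 1
Brick 172.172.1.12:/gluster/brick3/data-hdd
Status: Connected
Number of entries: 0
EOF
}
# Field 4 of a "Number of entries: N" line is the per-brick count.
total=$(heal_info | awk '/^Number of entries:/ {sum += $4} END {print sum+0}')
echo "pending heal entries: $total"
```

Watching this number trend toward zero is an easy way to tell whether the self-heal daemon is actually making progress or is stuck.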
Making progress toward getting it online again... I don't know why I get stale data from hosted-engine --vm-status or how to overcome that.
--Jim
On Tue, May 29, 2018 at 11:38 PM, Jim Kusznir <jim@palousetech.com> wrote:
I believe the gluster data store for the engine is up and working correctly. The rest are not mounted, as the engine hasn't managed to start correctly yet. I did perform the copy-a-junk-file test: copy a file into the data store, check that it's there on another host, then delete it and confirm it disappears on the first host; it passed. Here's the info and status. (I have NOT performed the steps that Krutika and Ravishankar suggested yet, as I don't have my data volumes working again yet.)
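That junk-file check is roughly the following. MOUNT_A and MOUNT_B are assumptions: in real use they would be the same gluster volume FUSE-mounted on two different hosts (e.g. under /rhev/data-center/mnt/glusterSD/); the temp-dir defaults below just make the sketch runnable standalone:

```shell
# Replication smoke test: write a marker file via one mount of the volume,
# verify it appears (and its deletion propagates) via another mount.
MOUNT_A="${MOUNT_A:-$(mktemp -d)}"   # volume as mounted on host 1 (assumption)
MOUNT_B="${MOUNT_B:-$MOUNT_A}"       # same volume as mounted on host 2 (assumption)
marker="junk-$$"

echo "test data" > "$MOUNT_A/$marker"
if [ -f "$MOUNT_B/$marker" ]; then echo "replicated: yes"; else echo "replicated: NO"; fi

rm -f "$MOUNT_A/$marker"
if [ ! -e "$MOUNT_B/$marker" ]; then echo "delete propagated: yes"; else echo "delete propagated: NO"; fi
```

This only proves the client path is coherent between two mounts; it says nothing about whether all bricks hold the data, which is what heal info is for.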
[root@ovirt2 images]# gluster volume info
Volume Name: data
Type: Replicate
Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick2/data
Brick2: ovirt2.nwfiber.com:/gluster/brick2/data
Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter)
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
server.allow-insecure: on
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
Volume Name: data-hdd
Type: Replicate
Volume ID: d342a3ab-16f3-49f0-bbcf-f788be8ac5f1
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 172.172.1.11:/gluster/brick3/data-hdd
Brick2: 172.172.1.12:/gluster/brick3/data-hdd
Brick3: 172.172.1.13:/gluster/brick3/data-hdd
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
network.ping-timeout: 30
server.allow-insecure: on
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
performance.readdir-ahead: on
Volume Name: engine
Type: Replicate
Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
Volume Name: iso
Type: Replicate
Volume ID: b1ba15f5-0f0f-4411-89d0-595179f02b92
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick4/iso
Brick2: ovirt2.nwfiber.com:/gluster/brick4/iso
Brick3: ovirt3.nwfiber.com:/gluster/brick4/iso (arbiter)
Options Reconfigured:
performance.strict-o-direct: on
nfs.disable: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 6
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
[root@ovirt2 images]# gluster volume status
Status of volume: data
Gluster process                                TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick2/data  49152     0          Y       3226
Brick ovirt2.nwfiber.com:/gluster/brick2/data  49152     0          Y       2967
Brick ovirt3.nwfiber.com:/gluster/brick2/data  49152     0          Y       2554
Self-heal Daemon on localhost                  N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com         N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com         N/A       N/A        Y       4771

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: data-hdd
Gluster process                                TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 172.172.1.11:/gluster/brick3/data-hdd    49153     0          Y       3232
Brick 172.172.1.12:/gluster/brick3/data-hdd    49153     0          Y       2976
Brick 172.172.1.13:/gluster/brick3/data-hdd    N/A       N/A        N       N/A
NFS Server on localhost                        N/A       N/A        N       N/A
Self-heal Daemon on localhost                  N/A       N/A        Y       4818
NFS Server on ovirt3.nwfiber.com               N/A       N/A        N       N/A
Self-heal Daemon on ovirt3.nwfiber.com         N/A       N/A        Y       17853
NFS Server on ovirt1.nwfiber.com               N/A       N/A        N       N/A
Self-heal Daemon on ovirt1.nwfiber.com         N/A       N/A        Y       4771

Task Status of Volume data-hdd
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: engine
Gluster process                                  TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick1/engine  49154     0          Y       3239
Brick ovirt2.nwfiber.com:/gluster/brick1/engine  49154     0          Y       2982
Brick ovirt3.nwfiber.com:/gluster/brick1/engine  49154     0          Y       2578
Self-heal Daemon on localhost                    N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com           N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com           N/A       N/A        Y       4771

Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: iso
Gluster process                               TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick4/iso  49155     0          Y       3247
Brick ovirt2.nwfiber.com:/gluster/brick4/iso  49155     0          Y       2990
Brick ovirt3.nwfiber.com:/gluster/brick4/iso  49155     0          Y       2580
Self-heal Daemon on localhost                 N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com        N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com        N/A       N/A        Y       4771

Task Status of Volume iso
------------------------------------------------------------------------------
There are no active volume tasks
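The one worrying row in all of that is the data-hdd brick on 172.172.1.13 showing Online = N. A quick way to pull just the offline bricks out of `gluster volume status` output is sketched below (fed from a canned two-row sample rather than the live command, which needs the cluster):

```shell
# Print bricks whose Online column is "N" in `gluster volume status` output.
# The heredoc stands in for: gluster volume status
status_output() {
cat <<'EOF'
Brick 172.172.1.12:/gluster/brick3/data-hdd  49153  0    Y  2976
Brick 172.172.1.13:/gluster/brick3/data-hdd  N/A    N/A  N  N/A
EOF
}
# Brick rows look like: Brick <host:path> <tcp> <rdma> <online> <pid>,
# so the Online flag is the second-to-last field.
offline=$(status_output | awk '$1 == "Brick" && $(NF-1) == "N" {print $2}')
echo "offline bricks: ${offline:-none}"
```

On a healthy run this prints "offline bricks: none", which makes it easy to drop into a cron check.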
On Tue, May 29, 2018 at 11:30 PM, Sahina Bose <sabose@redhat.com> wrote:
On Wed, May 30, 2018 at 10:42 AM, Jim Kusznir <jim@palousetech.com> wrote:
hosted-engine --deploy failed (would not come up on my existing gluster storage). However, I realized no changes were written to my existing storage. So, I went back to trying to get my old engine running.
hosted-engine --vm-status is now taking a very long time (5+ minutes) to return, and it returns stale information everywhere. I thought perhaps the lockspace was corrupt, so I tried to clean it and the metadata, but both are failing (--clean-metadata has hung and I can't even ctrl-c out of it).
How can I reinitialize all the lockspace/metadata safely? There is no engine or VMs running currently....
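For the record, the order I understand to be safe (global maintenance first, HA daemons stopped everywhere, then reinitialize) is sketched below as a dry run that only echoes each command. The exact flags are my reading of the 4.2-era hosted-engine tool, so treat them as assumptions and review against the docs before executing for real:

```shell
# Dry run: print the hosted-engine lockspace reset sequence instead of running it.
run() { echo "+ $*"; }   # swap echo for real execution once the plan is reviewed

plan() {
  run hosted-engine --set-maintenance --mode=global
  run systemctl stop ovirt-ha-agent ovirt-ha-broker   # on every host
  run hosted-engine --reinitialize-lockspace --force
  run systemctl start ovirt-ha-broker ovirt-ha-agent
  run hosted-engine --set-maintenance --mode=none
}

plan
```

The echo indirection is deliberate: with sanlock already wedged, it's worth eyeballing the full sequence before letting any of it touch the shared storage.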
I think the first thing to make sure of is that your storage is up and running. Can you mount the gluster volumes and access the contents there? Please provide the gluster volume info and gluster volume status output for the volumes you're using.
--Jim
On Tue, May 29, 2018 at 9:33 PM, Jim Kusznir <jim@palousetech.com> wrote:
Well, things went from bad to very, very bad....
It appears that during one of the 2-minute lockups, the fencing agents decided that another node in the cluster was down. As a result, 2 of the 3 nodes were simultaneously reset by their fencing agents. After the nodes came back up, the engine would not start, and all running VMs (including VMs on the 3rd node, which was not rebooted) crashed.
I've now been working for about 3 hours trying to get the engine to come up. I don't know why it won't start. hosted-engine --vm-start says it's starting, but it doesn't (virsh doesn't show any VMs running). I'm currently running --deploy, as I had run out of other options. I hope this will let me re-import all my existing VMs and start them back up once everything is running again.
I do have an unverified geo-rep backup; I don't know if it is a good one (there were several prior messages to this list, but I didn't get replies to my questions; the replication was behaving in ways I'd call strange, and the data directories are larger than their source).
I'll see if my --deploy works, and if not, I'll be back with another message/help request.
When the dust settles and I'm at least minimally functional again, I really want to understand why all these technologies designed to offer redundancy conspired to reduce uptime and create failures where there weren't any otherwise. I thought that hosted engine, 3 oVirt servers, and glusterfs with a minimum of replica 2+arbiter (or replica 3) should have offered strong resilience against server or disk failure, and should have prevented or recovered from data corruption. Instead, all of the above happened (once I get my cluster back up, I still have to try to recover my webserver VM, which won't boot due to XFS journal corruption created during the gluster crashes). I think a lot of these issues were rooted in the upgrade from 4.1 to 4.2.
--Jim
On Tue, May 29, 2018 at 6:25 PM, Jim Kusznir <jim@palousetech.com> wrote:
I also finally found the following in my system log on one server:
[10679.524491] INFO: task glusterclogro:14933 blocked for more than 120 seconds. [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10679.527144] glusterclogro D ffff97209832bf40 0 14933 1 0x00000080 [10679.527150] Call Trace: [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.527268] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10679.527271] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [10679.527275] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.527279] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds. [10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10679.529956] glusterposixfsy D ffff972495f84f10 0 14941 1 0x00000080 [10679.529961] Call Trace: [10679.529966] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.530003] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.530008] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.530038] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.530042] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.530046] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10679.530050] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [10679.530054] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.530058] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120 seconds. [10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
[10679.533732] glusteriotwr13 D ffff9720a83f0000 0 15486 1 0x00000080 [10679.533738] Call Trace: [10679.533747] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.533799] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.533806] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.533846] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.533852] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.533858] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10679.533863] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [10679.533868] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.533873] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10919.512757] INFO: task glusterclogro:14933 blocked for more than 120 seconds. [10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10919.516663] glusterclogro D ffff97209832bf40 0 14933 1 0x00000080 [10919.516677] Call Trace: [10919.516690] [<ffffffffb9913f79>] schedule+0x29/0x70 [10919.516696] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [10919.516703] [<ffffffffb951cc04>] ? blk_finish_plug+0x14/0x40 [10919.516768] [<ffffffffc05e9224>] ? _xfs_buf_ioapply+0x334/0x460 [xfs] [10919.516774] [<ffffffffb991432d>] wait_for_completion+0xfd/0x140 [10919.516782] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10919.516821] [<ffffffffc05eb0a3>] ? _xfs_buf_read+0x23/0x40 [xfs] [10919.516859] [<ffffffffc05eafa9>] xfs_buf_submit_wait+0xf9/0x1d0 [xfs] [10919.516902] [<ffffffffc061b279>] ? 
xfs_trans_read_buf_map+0x199/0x400 [xfs] [10919.516940] [<ffffffffc05eb0a3>] _xfs_buf_read+0x23/0x40 [xfs] [10919.516977] [<ffffffffc05eb1b9>] xfs_buf_read_map+0xf9/0x160 [xfs] [10919.517022] [<ffffffffc061b279>] xfs_trans_read_buf_map+0x199/0x400 [xfs] [10919.517057] [<ffffffffc05c8d04>] xfs_da_read_buf+0xd4/0x100 [xfs] [10919.517091] [<ffffffffc05c8d53>] xfs_da3_node_read+0x23/0xd0 [xfs] [10919.517126] [<ffffffffc05c9fee>] xfs_da3_node_lookup_int+0x6e/0x2f0 [xfs] [10919.517160] [<ffffffffc05d5a1d>] xfs_dir2_node_lookup+0x4d/0x170 [xfs] [10919.517194] [<ffffffffc05ccf5d>] xfs_dir_lookup+0x1bd/0x1e0 [xfs] [10919.517233] [<ffffffffc05fd8d9>] xfs_lookup+0x69/0x140 [xfs] [10919.517271] [<ffffffffc05fa018>] xfs_vn_lookup+0x78/0xc0 [xfs] [10919.517278] [<ffffffffb9425cf3>] lookup_real+0x23/0x60 [10919.517283] [<ffffffffb9426702>] __lookup_hash+0x42/0x60 [10919.517288] [<ffffffffb942d519>] SYSC_renameat2+0x3a9/0x5a0 [10919.517296] [<ffffffffb94d3753>] ? selinux_file_free_security+0x2 3/0x30 [10919.517304] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10919.517309] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10919.517313] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10919.517318] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10919.517323] [<ffffffffb942e58e>] SyS_renameat2+0xe/0x10 [10919.517328] [<ffffffffb942e5ce>] SyS_rename+0x1e/0x20 [10919.517333] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10919.517339] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11159.496095] INFO: task glusteriotwr9:15482 blocked for more than 120 seconds. [11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11159.498978] glusteriotwr9 D ffff971fa0fa1fa0 0 15482 1 0x00000080 [11159.498984] Call Trace: [11159.498995] [<ffffffffb9911f00>] ? 
bit_wait+0x50/0x50
[11159.498999] [<ffffffffb9913f79>] schedule+0x29/0x70
[11159.499003] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
[11159.499056] [<ffffffffc05dd9b7>] ? xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs]
[11159.499082] [<ffffffffc05dd43e>] ? xfs_iext_bno_to_irec+0x8e/0xd0 [xfs]
[11159.499090] [<ffffffffb92f7a12>] ? ktime_get_ts64+0x52/0xf0
[11159.499093] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50
[11159.499097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130
[11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20
[11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50
[11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0
[11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90
[11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40
[11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70
[11159.499125] [<ffffffffb9394e85>] grab_cache_page_write_begin+0x55/0xc0
[11159.499130] [<ffffffffb9484b76>] iomap_write_begin+0x66/0x100
[11159.499135] [<ffffffffb9484edf>] iomap_write_actor+0xcf/0x1d0
[11159.499140] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80
[11159.499144] [<ffffffffb94854e7>] iomap_apply+0xb7/0x150
[11159.499149] [<ffffffffb9485621>] iomap_file_buffered_write+0xa1/0xe0
[11159.499153] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80
[11159.499182] [<ffffffffc05f025d>] xfs_file_buffered_aio_write+0x12d/0x2c0 [xfs]
[11159.499213] [<ffffffffc05f057d>] xfs_file_aio_write+0x18d/0x1b0 [xfs]
[11159.499217] [<ffffffffb941a533>] do_sync_write+0x93/0xe0
[11159.499222] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0
[11159.499225] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0
[11159.499230] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11159.499234] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11159.499238] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than 120 seconds.
[11279.490197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.491665] xfsaild/dm-10   D ffff9720a8660fd0     0  1134      2 0x00000000
[11279.491671] Call Trace:
[11279.491682] [<ffffffffb92a3a2e>] ? try_to_del_timer_sync+0x5e/0x90
[11279.491688] [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.491744] [<ffffffffc060de36>] _xfs_log_force+0x1c6/0x2c0 [xfs]
[11279.491750] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.491783] [<ffffffffc0619fec>] ? xfsaild+0x16c/0x6f0 [xfs]
[11279.491817] [<ffffffffc060df5c>] xfs_log_force+0x2c/0x70 [xfs]
[11279.491849] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[11279.491880] [<ffffffffc0619fec>] xfsaild+0x16c/0x6f0 [xfs]
[11279.491913] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[11279.491919] [<ffffffffb92bb161>] kthread+0xd1/0xe0
[11279.491926] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40
[11279.491932] [<ffffffffb9920677>] ret_from_fork_nospec_begin+0x21/0x21
[11279.491936] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40
[11279.491976] INFO: task glusterclogfsyn:14934 blocked for more than 120 seconds.
[11279.493466] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.494952] glusterclogfsyn D ffff97209832af70     0 14934      1 0x00000080
[11279.494957] Call Trace:
[11279.494979] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod]
[11279.494983] [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.494987] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
[11279.494997] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod]
[11279.495001] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130
[11279.495005] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140
[11279.495010] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110
[11279.495049] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs]
[11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs]
[11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[11279.495090] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11279.495094] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20
[11279.495098] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11279.495102] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11279.495105] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds.
[11279.496606] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.498114] glusterposixfsy D ffff972495f84f10     0 14941      1 0x00000080
[11279.498118] Call Trace:
[11279.498134] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod]
[11279.498138] [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.498142] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
[11279.498152] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod]
[11279.498156] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130
[11279.498160] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140
[11279.498165] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.498169] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110
[11279.498202] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs]
[11279.498231] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs]
[11279.498238] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[11279.498242] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11279.498246] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20
[11279.498250] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11279.498254] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11279.498257] INFO: task glusteriotwr1:14950 blocked for more than 120 seconds.
[11279.499789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.501343] glusteriotwr1   D ffff97208b6daf70     0 14950      1 0x00000080
[11279.501348] Call Trace:
[11279.501353] [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.501390] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[11279.501396] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.501428] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[11279.501432] [<ffffffffb944ef3f>] generic_write_sync+0x4f/0x70
[11279.501461] [<ffffffffc05f0545>] xfs_file_aio_write+0x155/0x1b0 [xfs]
[11279.501466] [<ffffffffb941a533>] do_sync_write+0x93/0xe0
[11279.501471] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0
[11279.501475] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0
[11279.501479] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11279.501483] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11279.501489] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11279.501493] INFO: task glusteriotwr4:14953 blocked for more than 120 seconds.
[11279.503047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.504630] glusteriotwr4   D ffff972499f2bf40     0 14953      1 0x00000080
[11279.504635] Call Trace:
[11279.504640] [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.504676] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[11279.504681] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.504710] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[11279.504714] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[11279.504718] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11279.504722] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20
[11279.504725] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11279.504730] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[12127.466494] perf: interrupt took too long (8263 > 8150), lowering kernel.perf_event_max_sample_rate to 24000
--------------------
I think this is the cause of the massive oVirt performance issues, irrespective of gluster volume. At the time this happened, I was also ssh'ed into the host, running some rpm query commands. I had just run rpm -qa | grep glusterfs (to verify which version was actually installed), and that command took almost 2 minutes to return; normally it takes less than 2 seconds. That is all pure local SSD IO, too.
I'm no expert, but it's my understanding that any time software causes these kinds of issues, it's a serious bug in that software, even if the root cause is mis-handled exceptions. Is this correct?
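For reference, a quick way to see which processes the kernel is flagging is to tally the "blocked for more than 120 seconds" reports. A minimal sketch; the sample lines are copied from the kernel log above, and on a live host you would feed in dmesg (or /var/log/messages) instead:

```shell
# Tally hung-task reports per process name from a kernel log.
log='[11279.491976] INFO: task glusterclogfsyn:14934 blocked for more than 120 seconds.
[11279.495105] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds.
[11279.498257] INFO: task glusteriotwr1:14950 blocked for more than 120 seconds.
[11279.501493] INFO: task glusteriotwr4:14953 blocked for more than 120 seconds.'

# Extract the task name between "INFO: task " and ":<pid>", then count.
blocked=$(printf '%s\n' "$log" \
  | sed -n 's/.*INFO: task \([^:]*\):[0-9]*.*/\1/p' \
  | sort | uniq -c | sort -rn)
printf '%s\n' "$blocked"
```

All four reports here are gluster worker threads stuck in fsync/pwrite paths, which is why the whole host's IO stalls at once.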
--Jim
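The other open issue in this thread is the stuck heal. The standard triage commands, sketched below; the gluster commands need a live cluster and are shown for reference only, and the sample lines fed to awk mimic the heal info output quoted earlier:

```shell
# Heal-state triage (reference only; requires a live gluster cluster):
#   gluster volume heal data-hdd info          # per-brick pending entries
#   gluster volume heal data-hdd statistics    # crawl stats, incl. failed heals
#   gluster volume heal data-hdd full          # force a full self-heal crawl
#
# The per-brick "Number of entries" counts in `heal info` output can be
# summed with a short awk filter:
heal='Brick 172.172.1.11:/gluster/brick3/data-hdd
Number of entries: 8
Brick 172.172.1.12:/gluster/brick3/data-hdd
Number of entries: 8
Brick 172.172.1.13:/gluster/brick3/data-hdd
Number of entries: 8'

total=$(printf '%s\n' "$heal" | awk '/^Number of entries:/ {n += $4} END {print n}')
echo "total pending heal entries: $total"
```

A total that stays constant across repeated full heals, as described here, usually means the heals are failing rather than merely queued.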
On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir <jim@palousetech.com> wrote:
> I think this is the profile information for one of the volumes that > lives on the SSDs and is fully operational with no down/problem disks: > > [root@ovirt2 yum.repos.d]# gluster volume profile data info > Brick: ovirt2.nwfiber.com:/gluster/brick2/data > ---------------------------------------------- > Cumulative Stats: > Block Size: 256b+ 512b+ > 1024b+ > No. of Reads: 983 2696 > 1059 > No. of Writes: 0 1113 > 302 > > Block Size: 2048b+ 4096b+ > 8192b+ > No. of Reads: 852 88608 > 53526 > No. of Writes: 522 812340 > 76257 > > Block Size: 16384b+ 32768b+ > 65536b+ > No. of Reads: 54351 241901 > 15024 > No. of Writes: 21636 8656 > 8976 > > Block Size: 131072b+ > No. of Reads: 524156 > No. of Writes: 296071 > %-latency Avg-latency Min-Latency Max-Latency No. of calls > Fop > --------- ----------- ----------- ----------- ------------ > ---- > 0.00 0.00 us 0.00 us 0.00 us 4189 > RELEASE > 0.00 0.00 us 0.00 us 0.00 us 1257 > RELEASEDIR > 0.00 46.19 us 12.00 us 187.00 us 69 > FLUSH > 0.00 147.00 us 78.00 us 367.00 us 86 > REMOVEXATTR > 0.00 223.46 us 24.00 us 1166.00 us 149 > READDIR > 0.00 565.34 us 76.00 us 3639.00 us 88 > FTRUNCATE > 0.00 263.28 us 20.00 us 28385.00 us 228 > LK > 0.00 98.84 us 2.00 us 880.00 us 1198 > OPENDIR > 0.00 91.59 us 26.00 us 10371.00 us 3853 > STATFS > 0.00 494.14 us 17.00 us 193439.00 us 1171 > GETXATTR > 0.00 299.42 us 35.00 us 9799.00 us 2044 > READDIRP > 0.00 1965.31 us 110.00 us 382258.00 us 321 > XATTROP > 0.01 113.40 us 24.00 us 61061.00 us 8134 > STAT > 0.01 755.38 us 57.00 us 607603.00 us 3196 > DISCARD > 0.05 2690.09 us 58.00 us 2704761.00 us 3206 > OPEN > 0.10 119978.25 us 97.00 us 9406684.00 us 154 > SETATTR > 0.18 101.73 us 28.00 us 700477.00 us 313379 > FSTAT > 0.23 1059.84 us 25.00 us 2716124.00 us 38255 > LOOKUP > 0.47 1024.11 us 54.00 us 6197164.00 us 81455 > FXATTROP > 1.72 2984.00 us 15.00 us 37098954.00 us > 103020 FINODELK > 5.92 44315.32 us 51.00 us 24731536.00 us > 23957 FSYNC > 13.27 2399.78 us 25.00 us 22089540.00 
us > 991005 READ > 37.00 5980.43 us 52.00 us 22099889.00 us > 1108976 WRITE > 41.04 5452.75 us 13.00 us 22102452.00 us > 1349053 INODELK > > Duration: 10026 seconds > Data Read: 80046027759 bytes > Data Written: 44496632320 bytes > > Interval 1 Stats: > Block Size: 256b+ 512b+ > 1024b+ > No. of Reads: 983 2696 > 1059 > No. of Writes: 0 838 > 185 > > Block Size: 2048b+ 4096b+ > 8192b+ > No. of Reads: 852 85856 > 51575 > No. of Writes: 382 705802 > 57812 > > Block Size: 16384b+ 32768b+ > 65536b+ > No. of Reads: 52673 232093 > 14984 > No. of Writes: 13499 4908 > 4242 > > Block Size: 131072b+ > No. of Reads: 460040 > No. of Writes: 6411 > %-latency Avg-latency Min-Latency Max-Latency No. of calls > Fop > --------- ----------- ----------- ----------- ------------ > ---- > 0.00 0.00 us 0.00 us 0.00 us 2093 > RELEASE > 0.00 0.00 us 0.00 us 0.00 us 1093 > RELEASEDIR > 0.00 53.38 us 26.00 us 111.00 us 16 > FLUSH > 0.00 145.14 us 78.00 us 367.00 us 71 > REMOVEXATTR > 0.00 190.96 us 114.00 us 298.00 us 71 > SETATTR > 0.00 213.38 us 24.00 us 1145.00 us 90 > READDIR > 0.00 263.28 us 20.00 us 28385.00 us 228 > LK > 0.00 101.76 us 2.00 us 880.00 us 1093 > OPENDIR > 0.01 93.60 us 27.00 us 10371.00 us 3090 > STATFS > 0.02 537.47 us 17.00 us 193439.00 us 1038 > GETXATTR > 0.03 297.44 us 35.00 us 9799.00 us 1990 > READDIRP > 0.03 2357.28 us 110.00 us 382258.00 us 253 > XATTROP > 0.04 385.93 us 58.00 us 47593.00 us 2091 > OPEN > 0.04 114.86 us 24.00 us 61061.00 us 7715 > STAT > 0.06 444.59 us 57.00 us 333240.00 us 3053 > DISCARD > 0.42 316.24 us 25.00 us 290728.00 us 29823 > LOOKUP > 0.73 257.92 us 54.00 us 344812.00 us 63296 > FXATTROP > 1.37 98.30 us 28.00 us 67621.00 us 313172 > FSTAT > 1.58 2124.69 us 51.00 us 849200.00 us 16717 > FSYNC > 5.73 162.46 us 52.00 us 748492.00 us 794079 > WRITE > 7.19 2065.17 us 16.00 us 37098954.00 us > 78381 FINODELK > 36.44 886.32 us 25.00 us 2216436.00 us 925421 > READ > 46.30 1178.04 us 13.00 us 1700704.00 us 884635 > INODELK > > Duration: 7485 
seconds > Data Read: 71250527215 bytes > Data Written: 5119903744 bytes > > Brick: ovirt3.nwfiber.com:/gluster/brick2/data > ---------------------------------------------- > Cumulative Stats: > Block Size: 1b+ > No. of Reads: 0 > No. of Writes: 3264419 > %-latency Avg-latency Min-Latency Max-Latency No. of calls > Fop > --------- ----------- ----------- ----------- ------------ > ---- > 0.00 0.00 us 0.00 us 0.00 us 90 > FORGET > 0.00 0.00 us 0.00 us 0.00 us 9462 > RELEASE > 0.00 0.00 us 0.00 us 0.00 us 4254 > RELEASEDIR > 0.00 50.52 us 13.00 us 190.00 us 71 > FLUSH > 0.00 186.97 us 87.00 us 713.00 us 86 > REMOVEXATTR > 0.00 79.32 us 33.00 us 189.00 us 228 > LK > 0.00 220.98 us 129.00 us 513.00 us 86 > SETATTR > 0.01 259.30 us 26.00 us 2632.00 us 137 > READDIR > 0.02 322.76 us 145.00 us 2125.00 us 321 > XATTROP > 0.03 109.55 us 2.00 us 1258.00 us 1193 > OPENDIR > 0.05 70.21 us 21.00 us 431.00 us 3196 > DISCARD > 0.05 169.26 us 21.00 us 2315.00 us 1545 > GETXATTR > 0.12 176.85 us 63.00 us 2844.00 us 3206 > OPEN > 0.61 303.49 us 90.00 us 3085.00 us 9633 > FSTAT > 2.44 305.66 us 28.00 us 3716.00 us 38230 > LOOKUP > 4.52 266.22 us 55.00 us 53424.00 us 81455 > FXATTROP > 6.96 1397.99 us 51.00 us 64822.00 us 23889 > FSYNC > 16.48 84.74 us 25.00 us 6917.00 us 932592 > WRITE > 30.16 106.90 us 13.00 us 3920189.00 us 1353046 > INODELK > 38.55 1794.52 us 14.00 us 16210553.00 us > 103039 FINODELK > > Duration: 66562 seconds > Data Read: 0 bytes > Data Written: 3264419 bytes > > Interval 1 Stats: > Block Size: 1b+ > No. of Reads: 0 > No. of Writes: 794080 > %-latency Avg-latency Min-Latency Max-Latency No. 
of calls > Fop > --------- ----------- ----------- ----------- ------------ > ---- > 0.00 0.00 us 0.00 us 0.00 us 2093 > RELEASE > 0.00 0.00 us 0.00 us 0.00 us 1093 > RELEASEDIR > 0.00 70.31 us 26.00 us 125.00 us 16 > FLUSH > 0.00 193.10 us 103.00 us 713.00 us 71 > REMOVEXATTR > 0.01 227.32 us 133.00 us 513.00 us 71 > SETATTR > 0.01 79.32 us 33.00 us 189.00 us 228 > LK > 0.01 259.83 us 35.00 us 1138.00 us 89 > READDIR > 0.03 318.26 us 145.00 us 2047.00 us 253 > XATTROP > 0.04 112.67 us 3.00 us 1258.00 us 1093 > OPENDIR > 0.06 167.98 us 23.00 us 1951.00 us 1014 > GETXATTR > 0.08 70.97 us 22.00 us 431.00 us 3053 > DISCARD > 0.13 183.78 us 66.00 us 2844.00 us 2091 > OPEN > 1.01 303.82 us 90.00 us 3085.00 us 9610 > FSTAT > 3.27 316.59 us 30.00 us 3716.00 us 29820 > LOOKUP > 5.83 265.79 us 59.00 us 53424.00 us 63296 > FXATTROP > 7.95 1373.89 us 51.00 us 64822.00 us 16717 > FSYNC > 23.17 851.99 us 14.00 us 16210553.00 us > 78555 FINODELK > 24.04 87.44 us 27.00 us 6917.00 us 794081 > WRITE > 34.36 111.91 us 14.00 us 984871.00 us 886790 > INODELK > > Duration: 7485 seconds > Data Read: 0 bytes > Data Written: 794080 bytes > > > ----------------------- > Here is the data from the volume that is backed by the SHDDs and has > one failed disk: > [root@ovirt2 yum.repos.d]# gluster volume profile data-hdd info > Brick: 172.172.1.12:/gluster/brick3/data-hdd > -------------------------------------------- > Cumulative Stats: > Block Size: 256b+ 512b+ > 1024b+ > No. of Reads: 1702 86 > 16 > No. of Writes: 0 767 > 71 > > Block Size: 2048b+ 4096b+ > 8192b+ > No. of Reads: 19 51841 > 2049 > No. of Writes: 76 60668 > 35727 > > Block Size: 16384b+ 32768b+ > 65536b+ > No. of Reads: 1744 639 > 1088 > No. of Writes: 8524 2410 > 1285 > > Block Size: 131072b+ > No. of Reads: 771999 > No. of Writes: 29584 > %-latency Avg-latency Min-Latency Max-Latency No. 
of calls > Fop > --------- ----------- ----------- ----------- ------------ > ---- > 0.00 0.00 us 0.00 us 0.00 us 2902 > RELEASE > 0.00 0.00 us 0.00 us 0.00 us 1517 > RELEASEDIR > 0.00 197.00 us 197.00 us 197.00 us 1 > FTRUNCATE > 0.00 70.24 us 16.00 us 758.00 us 51 > FLUSH > 0.00 143.93 us 82.00 us 305.00 us 57 > REMOVEXATTR > 0.00 178.63 us 105.00 us 712.00 us 60 > SETATTR > 0.00 67.30 us 19.00 us 572.00 us 555 > LK > 0.00 322.80 us 23.00 us 4673.00 us 138 > READDIR > 0.00 336.56 us 106.00 us 11994.00 us 237 > XATTROP > 0.00 84.70 us 28.00 us 1071.00 us 3469 > STATFS > 0.01 387.75 us 2.00 us 146017.00 us 1467 > OPENDIR > 0.01 148.59 us 21.00 us 64374.00 us 4454 > STAT > 0.02 783.02 us 16.00 us 93502.00 us 1902 > GETXATTR > 0.03 1516.10 us 17.00 us 210690.00 us 1364 > ENTRYLK > 0.03 2555.47 us 300.00 us 674454.00 us 1064 > READDIRP > 0.07 85.74 us 19.00 us 68340.00 us 62849 > FSTAT > 0.07 1978.12 us 59.00 us 202596.00 us 2729 > OPEN > 0.22 708.57 us 15.00 us 394799.00 us 25447 > LOOKUP > 5.94 2331.74 us 15.00 us 1099530.00 us 207534 > FINODELK > 7.31 8311.75 us 58.00 us 1800216.00 us 71668 > FXATTROP > 12.49 7735.19 us 51.00 us 3595513.00 us 131642 > WRITE > 17.70 957.08 us 16.00 us 13700466.00 us > 1508160 INODELK > 24.55 2546.43 us 26.00 us 5077347.00 us 786060 > READ > 31.56 49699.15 us 47.00 us 3746331.00 us 51777 > FSYNC > > Duration: 10101 seconds > Data Read: 101562897361 bytes > Data Written: 4834450432 bytes > > Interval 0 Stats: > Block Size: 256b+ 512b+ > 1024b+ > No. of Reads: 1702 86 > 16 > No. of Writes: 0 767 > 71 > > Block Size: 2048b+ 4096b+ > 8192b+ > No. of Reads: 19 51841 > 2049 > No. of Writes: 76 60668 > 35727 > > Block Size: 16384b+ 32768b+ > 65536b+ > No. of Reads: 1744 639 > 1088 > No. of Writes: 8524 2410 > 1285 > > Block Size: 131072b+ > No. of Reads: 771999 > No. of Writes: 29584 > %-latency Avg-latency Min-Latency Max-Latency No. 
of calls > Fop > --------- ----------- ----------- ----------- ------------ > ---- > 0.00 0.00 us 0.00 us 0.00 us 2902 > RELEASE > 0.00 0.00 us 0.00 us 0.00 us 1517 > RELEASEDIR > 0.00 197.00 us 197.00 us 197.00 us 1 > FTRUNCATE > 0.00 70.24 us 16.00 us 758.00 us 51 > FLUSH > 0.00 143.93 us 82.00 us 305.00 us 57 > REMOVEXATTR > 0.00 178.63 us 105.00 us 712.00 us 60 > SETATTR > 0.00 67.30 us 19.00 us 572.00 us 555 > LK > 0.00 322.80 us 23.00 us 4673.00 us 138 > READDIR > 0.00 336.56 us 106.00 us 11994.00 us 237 > XATTROP > 0.00 84.70 us 28.00 us 1071.00 us 3469 > STATFS > 0.01 387.75 us 2.00 us 146017.00 us 1467 > OPENDIR > 0.01 148.59 us 21.00 us 64374.00 us 4454 > STAT > 0.02 783.02 us 16.00 us 93502.00 us 1902 > GETXATTR > 0.03 1516.10 us 17.00 us 210690.00 us 1364 > ENTRYLK > 0.03 2555.47 us 300.00 us 674454.00 us 1064 > READDIRP > 0.07 85.73 us 19.00 us 68340.00 us 62849 > FSTAT > 0.07 1978.12 us 59.00 us 202596.00 us 2729 > OPEN > 0.22 708.57 us 15.00 us 394799.00 us 25447 > LOOKUP > 5.94 2334.57 us 15.00 us 1099530.00 us 207534 > FINODELK > 7.31 8311.49 us 58.00 us 1800216.00 us 71668 > FXATTROP > 12.49 7735.32 us 51.00 us 3595513.00 us 131642 > WRITE > 17.71 957.08 us 16.00 us 13700466.00 us > 1508160 INODELK > 24.56 2546.42 us 26.00 us 5077347.00 us 786060 > READ > 31.54 49651.63 us 47.00 us 3746331.00 us 51777 > FSYNC > > Duration: 10101 seconds > Data Read: 101562897361 bytes > Data Written: 4834450432 bytes > > > On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir <jim@palousetech.com> > wrote: > >> Thank you for your response. >> >> I have 4 gluster volumes. 3 are replica 2 + arbitrator. replica >> bricks are on ovirt1 and ovirt2, arbitrator on ovirt3. The 4th volume is >> replica 3, with a brick on all three ovirt machines. >> >> The first 3 volumes are on an SSD disk; the 4th is on a Seagate >> SSHD (same in all three machines). On ovirt3, the SSHD has reported hard >> IO failures, and that brick is offline. 
However, the other two replicas >> are fully operational (although they still show contents in the heal info >> command that won't go away, but that may be the case until I replace the >> failed disk). >> >> What is bothering me is that ALL 4 gluster volumes are showing >> horrible performance issues. At this point, as the bad disk has been >> completely offlined, I would expect gluster to perform at normal speed, but >> that is definitely not the case. >> >> I've also noticed that the performance hits seem to come in waves: >> things seem to work acceptably (but slow) for a while, then suddenly, its >> as if all disk IO on all volumes (including non-gluster local OS disk >> volumes for the hosts) pause for about 30 seconds, then IO resumes again. >> During those times, I start getting VM not responding and host not >> responding notices as well as the applications having major issues. >> >> I've shut down most of my VMs and am down to just my essential core >> VMs (shedded about 75% of my VMs). I still am experiencing the same issues. >> >> Am I correct in believing that once the failed disk was brought >> offline that performance should return to normal? >> >> On Tue, May 29, 2018 at 1:27 PM, Alex K <rightkicktech@gmail.com> >> wrote: >> >>> I would check disks status and accessibility of mount points where >>> your gluster volumes reside. 
>>> >>> On Tue, May 29, 2018, 22:28 Jim Kusznir <jim@palousetech.com> >>> wrote: >>> >>>> On one ovirt server, I'm now seeing these messages: >>>> [56474.239725] blk_update_request: 63 callbacks suppressed >>>> [56474.239732] blk_update_request: I/O error, dev dm-2, sector 0 >>>> [56474.240602] blk_update_request: I/O error, dev dm-2, sector >>>> 3905945472 >>>> [56474.241346] blk_update_request: I/O error, dev dm-2, sector >>>> 3905945584 >>>> [56474.242236] blk_update_request: I/O error, dev dm-2, sector >>>> 2048 >>>> [56474.243072] blk_update_request: I/O error, dev dm-2, sector >>>> 3905943424 >>>> [56474.243997] blk_update_request: I/O error, dev dm-2, sector >>>> 3905943536 >>>> [56474.247347] blk_update_request: I/O error, dev dm-2, sector 0 >>>> [56474.248315] blk_update_request: I/O error, dev dm-2, sector >>>> 3905945472 >>>> [56474.249231] blk_update_request: I/O error, dev dm-2, sector >>>> 3905945584 >>>> [56474.250221] blk_update_request: I/O error, dev dm-2, sector >>>> 2048 >>>> >>>> >>>> >>>> >>>> On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir < >>>> jim@palousetech.com> wrote: >>>> >>>>> I see in messages on ovirt3 (my 3rd machine, the one upgraded to >>>>> 4.2): >>>>> >>>>> May 29 11:54:41 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>>> database connection failed (No such file or directory) >>>>> May 29 11:54:51 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>>> database connection failed (No such file or directory) >>>>> May 29 11:55:01 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>>> database connection failed (No such file or directory) >>>>> (appears a lot). >>>>> >>>>> I also found on the ssh session of that, some sysv warnings >>>>> about the backing disk for one of the gluster volumes (straight replica >>>>> 3). The glusterfs process for that disk on that machine went offline. 
Its >>>>> my understanding that it should continue to work with the other two >>>>> machines while I attempt to replace that disk, right? Attempted writes >>>>> (touching an empty file) can take 15 seconds, repeating it later will be >>>>> much faster. >>>>> >>>>> Gluster generates a bunch of different log files, I don't know >>>>> what ones you want, or from which machine(s). >>>>> >>>>> How do I do "volume profiling"? >>>>> >>>>> Thanks! >>>>> >>>>> On Tue, May 29, 2018 at 11:53 AM, Sahina Bose <sabose@redhat.com >>>>> > wrote: >>>>> >>>>>> Do you see errors reported in the mount logs for the volume? If >>>>>> so, could you attach the logs? >>>>>> Any issues with your underlying disks. Can you also attach >>>>>> output of volume profiling? >>>>>> >>>>>> On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir < >>>>>> jim@palousetech.com> wrote: >>>>>> >>>>>>> Ok, things have gotten MUCH worse this morning. I'm getting >>>>>>> random errors from VMs, right now, about a third of my VMs have been paused >>>>>>> due to storage issues, and most of the remaining VMs are not performing >>>>>>> well. >>>>>>> >>>>>>> At this point, I am in full EMERGENCY mode, as my production >>>>>>> services are now impacted, and I'm getting calls coming in with problems... >>>>>>> >>>>>>> I'd greatly appreciate help...VMs are running VERY slowly >>>>>>> (when they run), and they are steadily getting worse. I don't know why. I >>>>>>> was seeing CPU peaks (to 100%) on several VMs, in perfect sync, for a few >>>>>>> minutes at a time (while the VM became unresponsive and any VMs I was >>>>>>> logged into that were linux were giving me the CPU stuck messages in my >>>>>>> origional post). Is all this storage related? >>>>>>> >>>>>>> I also have two different gluster volumes for VM storage, and >>>>>>> only one had the issues, but now VMs in both are being affected at the same >>>>>>> time and same way. 
>>>>>>> >>>>>>> --Jim >>>>>>> >>>>>>> On Mon, May 28, 2018 at 10:50 PM, Sahina Bose < >>>>>>> sabose@redhat.com> wrote: >>>>>>> >>>>>>>> [Adding gluster-users to look at the heal issue] >>>>>>>> >>>>>>>> On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir < >>>>>>>> jim@palousetech.com> wrote: >>>>>>>> >>>>>>>>> Hello: >>>>>>>>> >>>>>>>>> I've been having some cluster and gluster performance issues >>>>>>>>> lately. I also found that my cluster was out of date, and was trying to >>>>>>>>> apply updates (hoping to fix some of these), and discovered the ovirt 4.1 >>>>>>>>> repos were taken completely offline. So, I was forced to begin an upgrade >>>>>>>>> to 4.2. According to docs I found/read, I needed only add the new repo, do >>>>>>>>> a yum update, reboot, and be good on my hosts (did the yum update, the >>>>>>>>> engine-setup on my hosted engine). Things seemed to work relatively well, >>>>>>>>> except for a gluster sync issue that showed up. >>>>>>>>> >>>>>>>>> My cluster is a 3 node hyperconverged cluster. I upgraded >>>>>>>>> the hosted engine first, then engine 3. When engine 3 came back up, for >>>>>>>>> some reason one of my gluster volumes would not sync. 
Here's sample output: >>>>>>>>> >>>>>>>>> [root@ovirt3 ~]# gluster volume heal data-hdd info >>>>>>>>> Brick 172.172.1.11:/gluster/brick3/data-hdd >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>>>>>>>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>>>>>>>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>>>>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>>>>>>>> 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>>>>>>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>>>>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>>>>>>>> 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>>>>>>>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>>>>>>> Status: Connected >>>>>>>>> Number of entries: 8 >>>>>>>>> >>>>>>>>> Brick 172.172.1.12:/gluster/brick3/data-hdd >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>>>>>>>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>>>>>>>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>>>>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>>>>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>>>>>>> 
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>>>>>>>> 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>>>>>>>> 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>>>>>>>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>>>>>>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>>>>>>> Status: Connected >>>>>>>>> Number of entries: 8 >>>>>>>>> >>>>>>>>> Brick 172.172.1.13:/gluster/brick3/data-hdd >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>>>>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>>>>>>>> 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>>>>>>>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>>>>>>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>>>>>>>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>>>>>>>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>>>>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>>>>>>>> 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>>>>>>> Status: Connected >>>>>>>>> Number of entries: 8 >>>>>>>>> >>>>>>>>> --------- >>>>>>>>> Its been in this state for a couple days now, and bandwidth >>>>>>>>> monitoring shows no appreciable data 
moving. I've tried repeatedly >>>>>>>>> commanding a full heal from all three clusters in the node. Its always the >>>>>>>>> same files that need healing. >>>>>>>>> >>>>>>>>> When running gluster volume heal data-hdd statistics, I see >>>>>>>>> sometimes different information, but always some number of "heal failed" >>>>>>>>> entries. It shows 0 for split brain. >>>>>>>>> >>>>>>>>> I'm not quite sure what to do. I suspect it may be due to >>>>>>>>> nodes 1 and 2 still being on the older ovirt/gluster release, but I'm >>>>>>>>> afraid to upgrade and reboot them until I have a good gluster sync (don't >>>>>>>>> need to create a split brain issue). How do I proceed with this? >>>>>>>>> >>>>>>>>> Second issue: I've been experiencing VERY POOR performance >>>>>>>>> on most of my VMs. To the tune that logging into a windows 10 vm via >>>>>>>>> remote desktop can take 5 minutes, launching quickbooks inside said vm can >>>>>>>>> easily take 10 minutes. On some linux VMs, I get random messages like this: >>>>>>>>> Message from syslogd@unifi at May 28 20:39:23 ... >>>>>>>>> kernel:[6171996.308904] NMI watchdog: BUG: soft lockup - >>>>>>>>> CPU#0 stuck for 22s! [mongod:14766] >>>>>>>>> >>>>>>>>> (the process and PID are often different) >>>>>>>>> >>>>>>>>> I'm not quite sure what to do about this either. My initial >>>>>>>>> thought was upgrad everything to current and see if its still there, but I >>>>>>>>> cannot move forward with that until my gluster is healed... >>>>>>>>> >>>>>>>>> Thanks! 
>>>>>>>>> --Jim >>>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>>> Users mailing list -- users@ovirt.org >>>>>>>>> To unsubscribe send an email to users-leave@ovirt.org >>>>>>>>> Privacy Statement: https://www.ovirt.org/site/pri >>>>>>>>> vacy-policy/ >>>>>>>>> oVirt Code of Conduct: https://www.ovirt.org/communit >>>>>>>>> y/about/community-guidelines/ >>>>>>>>> List Archives: https://lists.ovirt.org/archiv >>>>>>>>> es/list/users@ovirt.org/message/3LEV6ZQ3JV2XLAL7NYBTXOYMYUOT >>>>>>>>> IRQF/ >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> _______________________________________________ >>>> Users mailing list -- users@ovirt.org >>>> To unsubscribe send an email to users-leave@ovirt.org >>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ >>>> oVirt Code of Conduct: https://www.ovirt.org/communit >>>> y/about/community-guidelines/ >>>> List Archives: https://lists.ovirt.org/archiv >>>> es/list/users@ovirt.org/message/ACO7RFSLBSRBAIONIC2HQ6Z24ZDES5MF/ >>>> >>> >> >
_______________________________________________ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-leave@ovirt.org Privacy Statement: https://www.ovirt.org/site/privacy-policy/ oVirt Code of Conduct: https://www.ovirt.org/communit y/about/community-guidelines/ List Archives: https://lists.ovirt.org/archiv es/list/users@ovirt.org/message/JKBSLSZFRMFGILXGZB4YAMJIJKEKNGMA/
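Since "How do I do volume profiling?" came up above: the profile tables quoted in this thread come from gluster's built-in profiler. A sketch of the workflow; the profiler commands need a live cluster, so only the ranking step below actually runs, over sample fop rows copied from the data-hdd output:

```shell
# Volume profiling workflow (reference only; requires a live gluster cluster):
#   gluster volume profile data-hdd start
#   gluster volume profile data-hdd info    # cumulative + interval stats
#   gluster volume profile data-hdd stop
#
# Rank fop rows of `info` output by %-latency (column 1); column 6 is the fop.
profile='17.70 957.08 16.00 13700466.00 1508160 INODELK
24.55 2546.43 26.00 5077347.00 786060 READ
31.56 49699.15 47.00 3746331.00 51777 FSYNC'

top_fop=$(printf '%s\n' "$profile" | sort -rn -k1,1 | awk 'NR==1 {print $6}')
echo "highest %-latency fop: $top_fop"
```

On this volume FSYNC tops the list, consistent with the fsync-heavy stacks in the hung-task traces.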

As someone with hardware on the way to build a 3-node HCI cluster with only two 2 TB SSDs per node, this problem concerns me. I sometimes worry about the complexity glusterfs adds to the mix and how difficult it can be to manage when things go wrong. I've been following this thread and feel for you. You must be getting tired; I've been there and done that, and don't wish it upon anyone. I hope you get your environment back in working order soon and at some point discover what went wrong to cause this problem. On Wed, May 30, 2018, 8:55 AM Jim Kusznir, <jim@palousetech.com> wrote:
So, I'm now at the point where gluster appears to be functional, but the engine still won't start. Here's output from agent.log on one machine after giving the hosted-engine --vm-start command:
MainThread::INFO::2018-05-29 23:52:09,953::hosted_engine::948::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) Engine VM started on localhost
MainThread::INFO::2018-05-29 23:52:10,230::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStart-EngineStarting) sent? sent
MainThread::INFO::2018-05-29 23:56:16,061::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400)
MainThread::INFO::2018-05-29 23:56:16,075::states::779::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) VM is unexpectedly down.
MainThread::INFO::2018-05-29 23:56:16,076::state_decorators::92::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Timeout cleared while transitioning <class 'ovirt_hosted_engine_ha.agent.states.EngineStarting'> -> <class 'ovirt_hosted_engine_ha.agent.states.EngineMaybeAway'>
MainThread::INFO::2018-05-29 23:56:16,359::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStarting-EngineMaybeAway) sent?
sent MainThread::INFO::2018-05-29 23:56:16,554::ovf_store::151::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8-49e73e6ba929 --------------- here's an excerpt from vdsm.log: 2018-05-30 00:00:23,114-0700 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call VM.create succeeded in 0.02 seconds (__init__:573) 2018-05-30 00:00:23,115-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') VM wrapper has started (vm:2764) 2018-05-30 00:00:23,125-0700 INFO (vm/50392390) [vdsm.api] START getVolumeSize(sdUUID='c0acdefb-7d16-48ec-9d76-659b8fe33e2a', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='3de00d8c-d8b0-4ae0-9363-38a504f5d2b2', volUUID='d0fcd8de-6105-4f33-a674-727e3a11e89f', options=None) from=internal, task_id=c6d74ce9-1944-457a-9deb-d0a6929f39ce (api:46) 2018-05-30 00:00:23,130-0700 INFO (vm/50392390) [vdsm.api] FINISH getVolumeSize return={'truesize': '11298571776', 'apparentsize': '10737418240'} from=internal, task_id=c6d74ce9-1944-457a-9deb-d0a6929f39ce (api:52) 2018-05-30 00:00:23,131-0700 INFO (vm/50392390) [vds] prepared volume path: (clientIF:497) 2018-05-30 00:00:23,131-0700 INFO (vm/50392390) [vdsm.api] START prepareImage(sdUUID='c0acdefb-7d16-48ec-9d76-659b8fe33e2a', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='3de00d8c-d8b0-4ae0-9363-38a504f5d2b2', leafUUID='d0fcd8de-6105-4f33-a674-727e3a11e89f', allowIllegal=False) from=internal, task_id=d2797364-270f-4800-92b0-d114441194a3 (api:46) 2018-05-30 00:00:23,155-0700 INFO (vm/50392390) [storage.StorageDomain] Fixing permissions on /rhev/data-center/mnt/glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f (fileSD:622) 2018-05-30 00:00:23,156-0700 INFO (vm/50392390) [storage.StorageDomain] Creating domain run directory 
u'/var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a' (fileSD:576) 2018-05-30 00:00:23,156-0700 INFO (vm/50392390) [storage.fileUtils] Creating directory: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a mode: None (fileUtils:197) 2018-05-30 00:00:23,157-0700 INFO (vm/50392390) [storage.StorageDomain] Creating symlink from /rhev/data-center/mnt/glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2 to /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2 (fileSD:579) 2018-05-30 00:00:23,228-0700 INFO (jsonrpc/0) [root] /usr/libexec/vdsm/hooks/after_get_caps/openstacknet_utils.py: rc=0 err= (hooks:110) 2018-05-30 00:00:23,418-0700 INFO (vm/50392390) [vdsm.api] FINISH prepareImage return={'info': {'path': u'engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f', 'type': 'network', 'hosts': [{'port': '0', 'transport': 'tcp', 'name': '192.168.8.11'}, {'port': '0', 'transport': 'tcp', 'name': ' ovirt1.nwfiber.com'}, {'port': '0', 'transport': 'tcp', 'name': ' ovirt2.nwfiber.com'}, {'port': '0', 'transport': 'tcp', 'name': ' ovirt3.nwfiber.com'}], 'protocol': 'gluster'}, 'path': u'/var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f', 'imgVolumesInfo': [{'domainID': 'c0acdefb-7d16-48ec-9d76-659b8fe33e2a', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f', 'volumeID': u'd0fcd8de-6105-4f33-a674-727e3a11e89f', 'leasePath': u'/rhev/data-center/mnt/glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f.lease', 'imageID': '3de00d8c-d8b0-4ae0-9363-38a504f5d2b2'}]} 
from=internal, task_id=d2797364-270f-4800-92b0-d114441194a3 (api:52) 2018-05-30 00:00:23,419-0700 INFO (vm/50392390) [vds] prepared volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f (clientIF:497) 2018-05-30 00:00:23,420-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') Enabling drive monitoring (drivemonitor:54) 2018-05-30 00:00:23,434-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') drive 'hdc' path: 'file=' -> '*file=' (storagexml:323) 2018-05-30 00:00:23,435-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') drive 'vda' path: 'file=/rhev/data-center/00000000-0000-0000-0000-000000000000/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f' -> u'*file=/var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f' (storagexml:323) 2018-05-30 00:00:23,435-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') Using drive lease: {'domainID': 'c0acdefb-7d16-48ec-9d76-659b8fe33e2a', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f', 'volumeID': u'd0fcd8de-6105-4f33-a674-727e3a11e89f', 'leasePath': u'/rhev/data-center/mnt/glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f.lease', 'imageID': '3de00d8c-d8b0-4ae0-9363-38a504f5d2b2'} (lease:327) 2018-05-30 00:00:23,436-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') lease c0acdefb-7d16-48ec-9d76-659b8fe33e2a:d0fcd8de-6105-4f33-a674-727e3a11e89f path: 
'LEASE-PATH:d0fcd8de-6105-4f33-a674-727e3a11e89f:c0acdefb-7d16-48ec-9d76-659b8fe33e2a' -> u'/rhev/data-center/mnt/glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f.lease' (lease:316) 2018-05-30 00:00:23,436-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') lease c0acdefb-7d16-48ec-9d76-659b8fe33e2a:d0fcd8de-6105-4f33-a674-727e3a11e89f offset: 'LEASE-OFFSET:d0fcd8de-6105-4f33-a674-727e3a11e89f:c0acdefb-7d16-48ec-9d76-659b8fe33e2a' -> 0 (lease:316) 2018-05-30 00:00:23,580-0700 INFO (jsonrpc/0) [root] /usr/libexec/vdsm/hooks/after_get_caps/ovirt_provider_ovn_hook: rc=0 err=Failed to get Open VSwitch system-id . err = ['ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)'] (hooks:110) 2018-05-30 00:00:23,582-0700 INFO (jsonrpc/0) [api.host] FINISH getCapabilities return={'status': {'message': 'Done', 'code': 0}, 'info': {u'HBAInventory': {u'iSCSI': [{u'InitiatorName': u'iqn.1994-05.com.redhat:1283ce7a3171'}], u'FC': []}, u'packages2': {u'kernel': {u'release': u'862.3.2.el7.x86_64', u'version': u'3.10.0'}, u'glusterfs-rdma': {u'release': u'1.el7', u'version': u'3.12.9'}, u'glusterfs-fuse': {u'release': u'1.el7', u'version': u'3.12.9'}, u'spice-server': {u'release': u'2.el7_5.3', u'version': u'0.14.0'}, u'librbd1': {u'release': u'2.el7', u'version': u'0.94.5'}, u'vdsm': {u'release': u'1.el7.centos', u'version': u'4.20.27.1'}, u'qemu-kvm': {u'release': u'21.el7_5.3.1', u'version': u'2.10.0'}, u'openvswitch': {u'release': u'3.el7', u'version': u'2.9.0'}, u'libvirt': {u'release': u'14.el7_5.5', u'version': u'3.9.0'}, u'ovirt-hosted-engine-ha': {u'release': u'1.el7.centos', u'version': u'2.2.11'}, u'qemu-img': {u'release': u'21.el7_5.3.1', u'version': u'2.10.0'}, u'mom': {u'release': u'1.el7.centos', u'version': u'0.5.12'}, u'glusterfs': {u'release': u'1.el7', u'version': u'3.12.9'}, 
u'glusterfs-cli': {u'release': u'1.el7', u'version': u'3.12.9'}, u'glusterfs-server': {u'release': u'1.el7', u'version': u'3.12.9'}, u'glusterfs-geo-replication': {u'release': u'1.el7', u'version': u'3.12.9'}}, u'numaNodeDistance': {u'1': [20, 10], u'0': [10, 20]}, u'cpuModel': u'Intel(R) Xeon(R) CPU E5540 @ 2.53GHz', u'nestedVirtualization': False, u'liveMerge': u'true', u'hooks': {u'before_vm_start': {u'50_hostedengine': {u'md5': u'95c810cdcfe4195302a59574a5148289'}, u'50_vhostmd': {u'md5': u'9206bc390bcbf208b06a8e899581be2d'}, u'50_vfio_mdev': {u'md5': u'b77eaad7b8f5ca3825dca1b38b0dd446'}}, u'after_network_setup': {u'30_ethtool_options': {u'md5': u'f04c2ca5dce40663e2ed69806eea917c'}}, u'after_vm_destroy': {u'50_vhostmd': {u'md5': u'd70f7ee0453f632e87c09a157ff8ff66'}, u'50_vfio_mdev': {u'md5': u'35364c6e08ecc58410e31af6c73b17f6'}}, u'after_vm_start': {u'openstacknet_utils.py': {u'md5': u'640ef8b473cbc16b102c5ebf57ea533f'}, u'50_openstacknet': {u'md5': u'ea0a5a715da8c1badbcda28e8b8fa00e'}}, u'after_device_migrate_destination': {u'openstacknet_utils.py': {u'md5': u'640ef8b473cbc16b102c5ebf57ea533f'}, u'50_openstacknet': {u'md5': u'f93c12af7454bbba4c1ef445c2bc9860'}}, u'before_network_setup': {u'50_fcoe': {u'md5': u'28c352339c8beef1e1b05c67d106d062'}}, u'after_get_caps': {u'openstacknet_utils.py': {u'md5': u'640ef8b473cbc16b102c5ebf57ea533f'}, u'50_openstacknet': {u'md5': u'0e00c63ab44a952e722209ea31fd7a71'}, u'ovirt_provider_ovn_hook': {u'md5': u'257990644ee6c7824b522e2ab3077ae7'}}, u'after_nic_hotplug': {u'openstacknet_utils.py': {u'md5': u'640ef8b473cbc16b102c5ebf57ea533f'}, u'50_openstacknet': {u'md5': u'f93c12af7454bbba4c1ef445c2bc9860'}}, u'before_vm_migrate_destination': {u'50_vhostmd': {u'md5': u'9206bc390bcbf208b06a8e899581be2d'}}, u'after_device_create': {u'openstacknet_utils.py': {u'md5': u'640ef8b473cbc16b102c5ebf57ea533f'}, u'50_openstacknet': {u'md5': u'f93c12af7454bbba4c1ef445c2bc9860'}}, u'before_vm_dehibernate': {u'50_vhostmd': {u'md5': 
u'9206bc390bcbf208b06a8e899581be2d'}}, u'before_nic_hotplug': {u'50_vmfex': {u'md5': u'49caba1a5faadd8efacef966f79bc30a'}, u'openstacknet_utils.py': {u'md5': u'640ef8b473cbc16b102c5ebf57ea533f'}, u'50_openstacknet': {u'md5': u'0438de5ff9b6bf8d3160804ff71bf827'}, u'ovirt_provider_ovn_hook': {u'md5': u'ada7e4e757fd0241b682b5cc6a545fc8'}}, u'before_device_migrate_destination': {u'50_vmfex': {u'md5': u'49caba1a5faadd8efacef966f79bc30a'}}, u'before_device_create': {u'50_vmfex': {u'md5': u'49caba1a5faadd8efacef966f79bc30a'}, u'openstacknet_utils.py': {u'md5': u'640ef8b473cbc16b102c5ebf57ea533f'}, u'50_openstacknet': {u'md5': u'0438de5ff9b6bf8d3160804ff71bf827'}, u'ovirt_provider_ovn_hook': {u'md5': u'ada7e4e757fd0241b682b5cc6a545fc8'}}}, u'supportsIPv6': True, u'realtimeKernel': False, u'vmTypes': [u'kvm'], u'liveSnapshot': u'true', u'cpuThreads': u'16', u'kdumpStatus': 0, u'networks': {u'ovirtmgmt': {u'iface': u'ovirtmgmt', u'ipv6autoconf': False, u'addr': u'192.168.8.12', u'dhcpv6': False, u'ipv6addrs': [], u'switch': u'legacy', u'bridged': True, u'mtu': u'1500', u'dhcpv4': True, u'netmask': u'255.255.255.0', u'ipv4defaultroute': True, u'stp': u'off', u'ipv4addrs': [u' 192.168.8.12/24'], u'ipv6gateway': u'::', u'gateway': u'192.168.8.1', u'ports': [u'em1']}, u'Public_Fiber': {u'iface': u'Public_Fiber', u'ipv6autoconf': False, u'addr': u'', u'dhcpv6': False, u'ipv6addrs': [], u'switch': u'legacy', u'bridged': True, u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'', u'ipv4defaultroute': False, u'stp': u'off', u'ipv4addrs': [], u'ipv6gateway': u'::', u'gateway': u'', u'ports': [u'em1.3']}, u'Gluster': {u'iface': u'em4', u'ipv6autoconf': False, u'addr': u'172.172.1.12', u'dhcpv6': False, u'ipv6addrs': [], u'switch': u'legacy', u'bridged': False, u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'255.255.255.0', u'ipv4defaultroute': False, u'ipv4addrs': [u' 172.172.1.12/24'], u'interface': u'em4', u'ipv6gateway': u'::', u'gateway': u''}}, u'kernelArgs': 
u'BOOT_IMAGE=/vmlinuz-3.10.0-862.3.2.el7.x86_64 root=/dev/mapper/centos_ovirt-root ro crashkernel=auto rd.lvm.lv=centos_ovirt/root rd.lvm.lv=centos_ovirt/swap rhgb quiet LANG=en_US.UTF-8', u'bridges': {u'Public_Fiber': {u'ipv6autoconf': False, u'addr': u'', u'dhcpv6': False, u'ipv6addrs': [], u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'', u'ipv4defaultroute': False, u'stp': u'off', u'ipv4addrs': [], u'ipv6gateway': u'::', u'gateway': u'', u'opts': {u'multicast_last_member_count': u'2', u'vlan_protocol': u'0x8100', u'hash_elasticity': u'4', u'multicast_query_response_interval': u'1000', u'group_fwd_mask': u'0x0', u'multicast_snooping': u'1', u'multicast_startup_query_interval': u'3125', u'hello_timer': u'0', u'multicast_querier_interval': u'25500', u'max_age': u'2000', u'hash_max': u'512', u'stp_state': u'0', u'topology_change_detected': u'0', u'priority': u'32768', u'multicast_igmp_version': u'2', u'multicast_membership_interval': u'26000', u'root_path_cost': u'0', u'root_port': u'0', u'multicast_stats_enabled': u'0', u'multicast_startup_query_count': u'2', u'nf_call_iptables': u'0', u'vlan_stats_enabled': u'0', u'hello_time': u'200', u'topology_change': u'0', u'bridge_id': u'8000.00219b900710', u'topology_change_timer': u'0', u'ageing_time': u'30000', u'nf_call_ip6tables': u'0', u'multicast_mld_version': u'1', u'gc_timer': u'25509', u'root_id': u'8000.00219b900710', u'nf_call_arptables': u'0', u'group_addr': u'1:80:c2:0:0:0', u'multicast_last_member_interval': u'100', u'default_pvid': u'1', u'multicast_query_interval': u'12500', u'multicast_query_use_ifaddr': u'0', u'tcn_timer': u'0', u'multicast_router': u'1', u'vlan_filtering': u'0', u'multicast_querier': u'0', u'forward_delay': u'0'}, u'ports': [u'em1.3']}, u'ovirtmgmt': {u'ipv6autoconf': False, u'addr': u'192.168.8.12', u'dhcpv6': False, u'ipv6addrs': [], u'mtu': u'1500', u'dhcpv4': True, u'netmask': u'255.255.255.0', u'ipv4defaultroute': True, u'stp': u'off', u'ipv4addrs': [u'192.168.8.12/24'], 
u'ipv6gateway': u'::', u'gateway': u'192.168.8.1', u'opts': {u'multicast_last_member_count': u'2', u'vlan_protocol': u'0x8100', u'hash_elasticity': u'4', u'multicast_query_response_interval': u'1000', u'group_fwd_mask': u'0x0', u'multicast_snooping': u'1', u'multicast_startup_query_interval': u'3125', u'hello_timer': u'0', u'multicast_querier_interval': u'25500', u'max_age': u'2000', u'hash_max': u'512', u'stp_state': u'0', u'topology_change_detected': u'0', u'priority': u'32768', u'multicast_igmp_version': u'2', u'multicast_membership_interval': u'26000', u'root_path_cost': u'0', u'root_port': u'0', u'multicast_stats_enabled': u'0', u'multicast_startup_query_count': u'2', u'nf_call_iptables': u'0', u'vlan_stats_enabled': u'0', u'hello_time': u'200', u'topology_change': u'0', u'bridge_id': u'8000.00219b900710', u'topology_change_timer': u'0', u'ageing_time': u'30000', u'nf_call_ip6tables': u'0', u'multicast_mld_version': u'1', u'gc_timer': u'13221', u'root_id': u'8000.00219b900710', u'nf_call_arptables': u'0', u'group_addr': u'1:80:c2:0:0:0', u'multicast_last_member_interval': u'100', u'default_pvid': u'1', u'multicast_query_interval': u'12500', u'multicast_query_use_ifaddr': u'0', u'tcn_timer': u'0', u'multicast_router': u'1', u'vlan_filtering': u'0', u'multicast_querier': u'0', u'forward_delay': u'0'}, u'ports': [u'em1']}}, u'uuid': u'4C4C4544-005A-4710-8034-B2C04F4C4B31', u'onlineCpus': u'0,2,4,6,8,10,12,14,1,3,5,7,9,11,13,15', u'nameservers': [u'192.168.8.1', u'8.8.8.8'], u'nics': {u'em4': {u'ipv6autoconf': False, u'addr': u'172.172.1.12', u'speed': 1000, u'dhcpv6': False, u'ipv6addrs': [], u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'255.255.255.0', u'ipv4defaultroute': False, u'ipv4addrs': [u'172.172.1.12/24'], u'hwaddr': u'00:21:9b:90:07:16', u'ipv6gateway': u'::', u'gateway': u''}, u'em1': {u'ipv6autoconf': False, u'addr': u'', u'speed': 1000, u'dhcpv6': False, u'ipv6addrs': [], u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'', u'ipv4defaultroute': 
False, u'ipv4addrs': [], u'hwaddr': u'00:21:9b:90:07:10', u'ipv6gateway': u'::', u'gateway': u''}, u'em3': {u'ipv6autoconf': True, u'addr': u'', u'speed': 0, u'dhcpv6': False, u'ipv6addrs': [], u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'', u'ipv4defaultroute': False, u'ipv4addrs': [], u'hwaddr': u'00:21:9b:90:07:14', u'ipv6gateway': u'::', u'gateway': u''}, u'em2': {u'ipv6autoconf': True, u'addr': u'', u'speed': 0, u'dhcpv6': False, u'ipv6addrs': [], u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'', u'ipv4defaultroute': False, u'ipv4addrs': [], u'hwaddr': u'00:21:9b:90:07:12', u'ipv6gateway': u'::', u'gateway': u''}}, u'software_revision': u'1', u'hostdevPassthrough': u'false', u'clusterLevels': [u'3.6', u'4.0', u'4.1', u'4.2'], u'cpuFlags': u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,xtopology,nonstop_tsc,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,dca,sse4_1,sse4_2,popcnt,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,dtherm,ida,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_n270', u'kernelFeatures': {u'RETP': 1, u'IBRS': 0, u'PTI': 1}, u'ISCSIInitiatorName': u'iqn.1994-05.com.redhat:1283ce7a3171', u'netConfigDirty': u'False', u'selinux': {u'mode': u'1'}, u'autoNumaBalancing': 1, u'reservedMem': u'321', u'containers': False, u'bondings': {}, u'software_version': u'4.20', u'supportedENGINEs': [u'3.6', u'4.0', u'4.1', u'4.2'], u'cpuSpeed': u'2527.017', u'numaNodes': {u'1': {u'totalMemory': u'16371', u'cpus': [1, 3, 5, 7, 9, 11, 13, 15]}, u'0': {u'totalMemory': u'16384', u'cpus': [0, 2, 4, 6, 8, 10, 12, 14]}}, u'cpuSockets': u'2', u'vlans': {u'em1.3': {u'iface': u'em1', u'ipv6autoconf': False, u'addr': u'', u'dhcpv6': False, u'ipv6addrs': [], u'vlanid': 3, u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'', u'ipv4defaultroute': False, u'ipv4addrs': [], 
u'ipv6gateway': u'::', u'gateway': u''}}, u'version_name': u'Snow Man', 'lastClientIface': 'lo', u'cpuCores': u'8', u'hostedEngineDeployed': True, u'hugepages': [2048], u'guestOverhead': u'65', u'additionalFeatures': [u'libgfapi_supported', u'GLUSTER_SNAPSHOT', u'GLUSTER_GEO_REPLICATION', u'GLUSTER_BRICK_MANAGEMENT'], u'kvmEnabled': u'true', u'memSize': u'31996', u'emulatedMachines': [u'pc-i440fx-rhel7.1.0', u'pc-q35-rhel7.3.0', u'rhel6.3.0', u'pc-i440fx-rhel7.5.0', u'pc-i440fx-rhel7.0.0', u'rhel6.1.0', u'pc-i440fx-rhel7.4.0', u'rhel6.6.0', u'pc-q35-rhel7.5.0', u'rhel6.2.0', u'pc', u'pc-i440fx-rhel7.3.0', u'q35', u'pc-i440fx-rhel7.2.0', u'rhel6.4.0', u'pc-q35-rhel7.4.0', u'rhel6.0.0', u'rhel6.5.0'], u'rngSources': [u'hwrng', u'random'], u'operatingSystem': {u'release': u'5.1804.el7.centos.2', u'pretty_name': u'CentOS Linux 7 (Core)', u'version': u'7', u'name': u'RHEL'}}} from=::1,35660 (api:52) 2018-05-30 00:00:23,593-0700 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getCapabilities succeeded in 3.26 seconds (__init__:573) 2018-05-30 00:00:23,724-0700 INFO (vm/50392390) [root] /usr/libexec/vdsm/hooks/before_device_create/50_openstacknet: rc=0 err= (hooks:110) 2018-05-30 00:00:23,919-0700 INFO (vm/50392390) [root] /usr/libexec/vdsm/hooks/before_device_create/50_vmfex: rc=0 err= (hooks:110) 2018-05-30 00:00:24,213-0700 INFO (vm/50392390) [root] /usr/libexec/vdsm/hooks/before_device_create/openstacknet_utils.py: rc=0 err= (hooks:110) 2018-05-30 00:00:24,392-0700 INFO (vm/50392390) [root] /usr/libexec/vdsm/hooks/before_device_create/ovirt_provider_ovn_hook: rc=0 err= (hooks:110) 2018-05-30 00:00:24,401-0700 INFO (jsonrpc/1) [api.host] START getAllVmStats() from=::1,33766 (api:46) 2018-05-30 00:00:24,403-0700 INFO (jsonrpc/1) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::1,33766 (api:52) 2018-05-30 00:00:24,403-0700 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats 
succeeded in 0.01 seconds (__init__:573) 2018-05-30 00:00:24,623-0700 INFO (vm/50392390) [root] /usr/libexec/vdsm/hooks/before_vm_start/50_hostedengine: rc=0 err= (hooks:110) 2018-05-30 00:00:24,836-0700 INFO (vm/50392390) [root] /usr/libexec/vdsm/hooks/before_vm_start/50_vfio_mdev: rc=0 err= (hooks:110) 2018-05-30 00:00:25,022-0700 INFO (vm/50392390) [root] /usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd: rc=0 err= (hooks:110) 2018-05-30 00:00:25,023-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') <?xml version="1.0" encoding="utf-8"?><domain type="kvm" xmlns:ns0=" http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0"> <name>HostedEngine</name> <uuid>50392390-4f89-435c-bf3b-8254b58a4ef7</uuid> <memory>8388608</memory> <currentMemory>8388608</currentMemory> <maxMemory slots="16">33554432</maxMemory> <vcpu current="4">16</vcpu> <sysinfo type="smbios"> <system> <entry name="manufacturer">oVirt</entry> <entry name="product">oVirt Node</entry> <entry name="version">7-5.1804.el7.centos.2</entry> <entry name="serial">4C4C4544-005A-4710-8034-B2C04F4C4B31</entry> <entry name="uuid">50392390-4f89-435c-bf3b-8254b58a4ef7</entry> </system> </sysinfo> <clock adjustment="0" offset="variable"> <timer name="rtc" tickpolicy="catchup"/> <timer name="pit" tickpolicy="delay"/> <timer name="hpet" present="no"/> </clock> <features> <acpi/> </features> <cpu match="exact"> <model>Nehalem</model> <topology cores="1" sockets="16" threads="1"/> <numa> <cell cpus="0,1,2,3" id="0" memory="8388608"/> </numa> </cpu> <cputune/> <devices> <input bus="usb" type="tablet"/> <channel type="unix"> <target name="ovirt-guest-agent.0" type="virtio"/> <source mode="bind" path="/var/lib/libvirt/qemu/channels/50392390-4f89-435c-bf3b-8254b58a4ef7.ovirt-guest-agent.0"/> </channel> <channel type="unix"> <target name="org.qemu.guest_agent.0" type="virtio"/> <source mode="bind" 
path="/var/lib/libvirt/qemu/channels/50392390-4f89-435c-bf3b-8254b58a4ef7.org.qemu.guest_agent.0"/> </channel> <controller index="0" ports="16" type="virtio-serial"> <address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci"/> </controller> <rng model="virtio"> <backend model="random">/dev/urandom</backend> </rng> <graphics autoport="yes" keymap="en-us" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" type="vnc"> <listen network="vdsm-ovirtmgmt" type="network"/> </graphics> <controller type="usb"> <address bus="0x00" domain="0x0000" function="0x2" slot="0x01" type="pci"/> </controller> <controller type="ide"> <address bus="0x00" domain="0x0000" function="0x1" slot="0x01" type="pci"/> </controller> <video> <model heads="1" type="cirrus" vram="16384"/> <address bus="0x00" domain="0x0000" function="0x0" slot="0x05" type="pci"/> </video> <memballoon model="none"/> <disk device="cdrom" snapshot="no" type="file"> <driver error_policy="report" name="qemu" type="raw"/> <source file="" startupPolicy="optional"/> <target bus="ide" dev="hdc"/> <readonly/> <address bus="1" controller="0" target="0" type="drive" unit="0"/> </disk> <disk device="disk" snapshot="no" type="file"> <target bus="virtio" dev="vda"/> <source file="/var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f"/> <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/> <alias name="ua-3de00d8c-d8b0-4ae0-9363-38a504f5d2b2"/> <address bus="0x00" domain="0x0000" function="0x0" slot="0x04" type="pci"/> <serial>3de00d8c-d8b0-4ae0-9363-38a504f5d2b2</serial> </disk> <lease> <key>d0fcd8de-6105-4f33-a674-727e3a11e89f</key> <lockspace>c0acdefb-7d16-48ec-9d76-659b8fe33e2a</lockspace> <target offset="0" path="/rhev/data-center/mnt/glusterSD/192.168.8.11: _engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f.lease"/> </lease> 
<interface type="bridge"> <model type="virtio"/> <link state="up"/> <source bridge="ovirtmgmt"/> <alias name="ua-9f93c126-cbb3-4c5b-909e-41a53c2e31fb"/> <address bus="0x00" domain="0x0000" function="0x0" slot="0x02" type="pci"/> <mac address="00:16:3e:2d:bd:b1"/> <filterref filter="vdsm-no-mac-spoofing"/> <bandwidth/> </interface> <channel type="unix"><target name="org.ovirt.hosted-engine-setup.0" type="virtio"/><source mode="bind" path="/var/lib/libvirt/qemu/channels/50392390-4f89-435c-bf3b-8254b58a4ef7.org.ovirt.hosted-engine-setup.0"/></channel></devices> <pm> <suspend-to-disk enabled="no"/> <suspend-to-mem enabled="no"/> </pm> <os> <type arch="x86_64" machine="pc-i440fx-rhel7.3.0">hvm</type> <smbios mode="sysinfo"/> </os> <metadata> <ns0:qos/> <ovirt-vm:vm> <minGuaranteedMemoryMb type="int">8192</minGuaranteedMemoryMb> <clusterVersion>4.2</clusterVersion> <ovirt-vm:custom/> <ovirt-vm:device mac_address="00:16:3e:2d:bd:b1"> <ovirt-vm:custom/> </ovirt-vm:device> <ovirt-vm:device devtype="disk" name="vda">
<ovirt-vm:poolID>00000000-0000-0000-0000-000000000000</ovirt-vm:poolID>
<ovirt-vm:volumeID>d0fcd8de-6105-4f33-a674-727e3a11e89f</ovirt-vm:volumeID> <ovirt-vm:shared>exclusive</ovirt-vm:shared>
<ovirt-vm:imageID>3de00d8c-d8b0-4ae0-9363-38a504f5d2b2</ovirt-vm:imageID>
<ovirt-vm:domainID>c0acdefb-7d16-48ec-9d76-659b8fe33e2a</ovirt-vm:domainID> </ovirt-vm:device> <launchPaused>false</launchPaused> <resumeBehavior>auto_resume</resumeBehavior> </ovirt-vm:vm> </metadata> <on_poweroff>destroy</on_poweroff><on_reboot>destroy</on_reboot><on_crash>destroy</on_crash></domain> (vm:2867)
2018-05-30 00:00:25,252-0700 INFO (monitor/c0acdef) [storage.SANLock] Acquiring host id for domain c0acdefb-7d16-48ec-9d76-659b8fe33e2a (id=2, async=True) (clusterlock:284)
2018-05-30 00:00:26,808-0700 ERROR (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') The vm start process failed (vm:943)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 872, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2872, in _run
    dom.createWithFlags(flags)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1099, in createWithFlags
    if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
libvirtError: Failed to acquire lock: No space left on device
2018-05-30 00:00:26,809-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') Changed state to Down: Failed to acquire lock: No space left on device (code=1) (vm:1683)
------------- The "failed to acquire lock: no space left on device" error has me concerned... Yet no device is full on any of the servers (especially the one I'm on):
Filesystem                      Size  Used  Avail  Use%  Mounted on
/dev/mapper/centos_ovirt-root   8.0G  6.6G  1.5G   82%   /
devtmpfs                        16G   0     16G    0%    /dev
tmpfs                           16G   4.0K  16G    1%    /dev/shm
tmpfs                           16G   18M   16G    1%    /run
tmpfs                           16G   0     16G    0%    /sys/fs/cgroup
/dev/mapper/gluster-data        177G  59G   119G   33%   /gluster/brick2
/dev/mapper/gluster-engine      25G   18G   7.9G   69%   /gluster/brick1
/dev/mapper/vg_thin-iso         25G   7.5G  18G    30%   /gluster/brick4
/dev/mapper/vg_thin-data--hdd   1.2T  308G  892G   26%   /gluster/brick3
/dev/sda1                       497M  313M  185M   63%   /boot
192.168.8.11:/engine            25G   20G   5.5G   79%   /rhev/data-center/mnt/glusterSD/192.168.8.11:_engine
tmpfs                           3.2G  0     3.2G   0%    /run/user/0
192.168.8.11:/data              136G  60G   77G    44%   /rhev/data-center/mnt/glusterSD/192.168.8.11:_data
192.168.8.11:/iso               25G   7.5G  18G    30%   /rhev/data-center/mnt/glusterSD/192.168.8.11:_iso
172.172.1.11:/data-hdd          1.2T  308G  892G   26%   /rhev/data-center/mnt/glusterSD/172.172.1.11:_data-hdd
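For what it's worth, sanlock's "No space left on device" usually isn't about filesystem space at all: sanlock returns ENOSPC when it can't get a usable slot in the lockspace on the storage domain. A few read-only commands can show what sanlock actually holds (this is a hedged sketch, not from the thread; the lockspace path below is assumed from the engine-domain UUID in the vdsm.log excerpt above and would differ on another cluster):

```shell
# Inspect sanlock state rather than disk space (read-only; run as root).
sanlock client status                  # lockspaces and resources this host holds
sanlock client log_dump | tail -n 50   # recent sanlock activity

# Dump the on-disk ids file of the engine storage domain (path assumed
# from the vdsm.log excerpt above):
sanlock direct dump \
  /rhev/data-center/mnt/glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/dom_md/ids
```

If the lockspace is absent or stale here, that would line up with the libvirt "Failed to acquire lock" error even though df shows free space everywhere.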
-------- --Jim
On Tue, May 29, 2018 at 11:53 PM, Jim Kusznir <jim@palousetech.com> wrote:
I've just gotten back to where hosted-engine --vm-status does (eventually) return some output other than the "ha-agent or storage is down" error. It currently takes up to two minutes to return, and the last run returned stale data.
Gluster-wise, gluster volume heal <volume> info shows that three of the volumes have all bricks up and 0 entries to heal (and the command returns quickly). The 4th volume, which is missing one brick (but still has two replicas), does return entries, and is currently taking a very long time to list them.
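When watching a heal like this, the full "heal info" listing is slow and noisy; a small helper (hedged sketch, not from the thread) can reduce it to one count per brick so repeated runs show whether the counts are actually draining:

```shell
# Summarise "gluster volume heal <vol> info" output as "<brick>: <count>".
heal_summary() {
  awk '/^Brick /             { brick = $2 }
       /^Number of entries:/ { print brick ": " $NF }'
}

# On a real node: gluster volume heal data-hdd info | heal_summary
# Canned demo of just the parsing:
heal_summary <<'EOF'
Brick 172.172.1.11:/gluster/brick3/data-hdd
/path/a
/path/b
Status: Connected
Number of entries: 2
EOF
# prints: 172.172.1.11:/gluster/brick3/data-hdd: 2
```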
Making progress toward getting it online again... I don't know why I get stale data in hosted-engine --vm-status, or how to overcome that...
--Jim
On Tue, May 29, 2018 at 11:38 PM, Jim Kusznir <jim@palousetech.com> wrote:
I believe the gluster data store for the engine is up and working correctly. The rest are not mounted, as the engine hasn't managed to start correctly yet. I did perform the copy-a-junk-file test: copy a file into the data store, check that it's there on another host, then delete it and see that it disappeared on the first host; it passed. Here's the info and status. (I have NOT performed the steps that Krutika and Ravishankar suggested yet, as I don't have my data volumes working again yet.)
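That copy-a-junk-file check can be sketched as a small function (hedged; on the real cluster the mount argument would be a gluster mount such as /rhev/data-center/mnt/glusterSD/192.168.8.11:_data, and the read-back step would be run from a second host rather than locally):

```shell
# Sanity-check a mount: write a marker file, read it back, delete it,
# confirm it is gone. Returns 0 on success.
repl_test() {
  mnt=$1
  marker="$mnt/.repl-test.$$"
  echo "replication-test" > "$marker" || return 1
  grep -q replication-test "$marker" || return 1   # on a real test: run from another host
  rm -f "$marker"
  [ ! -e "$marker" ]
}

# Usage: repl_test /rhev/data-center/mnt/glusterSD/192.168.8.11:_data
```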
[root@ovirt2 images]# gluster volume info
Volume Name: data
Type: Replicate
Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick2/data
Brick2: ovirt2.nwfiber.com:/gluster/brick2/data
Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter)
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
server.allow-insecure: on
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
Volume Name: data-hdd
Type: Replicate
Volume ID: d342a3ab-16f3-49f0-bbcf-f788be8ac5f1
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 172.172.1.11:/gluster/brick3/data-hdd
Brick2: 172.172.1.12:/gluster/brick3/data-hdd
Brick3: 172.172.1.13:/gluster/brick3/data-hdd
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
network.ping-timeout: 30
server.allow-insecure: on
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
performance.readdir-ahead: on
Volume Name: engine
Type: Replicate
Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
Volume Name: iso
Type: Replicate
Volume ID: b1ba15f5-0f0f-4411-89d0-595179f02b92
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick4/iso
Brick2: ovirt2.nwfiber.com:/gluster/brick4/iso
Brick3: ovirt3.nwfiber.com:/gluster/brick4/iso (arbiter)
Options Reconfigured:
performance.strict-o-direct: on
nfs.disable: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 6
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
[root@ovirt2 images]# gluster volume status
Status of volume: data
Gluster process                                 TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick2/data   49152     0          Y       3226
Brick ovirt2.nwfiber.com:/gluster/brick2/data   49152     0          Y       2967
Brick ovirt3.nwfiber.com:/gluster/brick2/data   49152     0          Y       2554
Self-heal Daemon on localhost                   N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y       4771
Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: data-hdd
Gluster process                                 TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 172.172.1.11:/gluster/brick3/data-hdd     49153     0          Y       3232
Brick 172.172.1.12:/gluster/brick3/data-hdd     49153     0          Y       2976
Brick 172.172.1.13:/gluster/brick3/data-hdd     N/A       N/A        N       N/A
NFS Server on localhost                         N/A       N/A        N       N/A
Self-heal Daemon on localhost                   N/A       N/A        Y       4818
NFS Server on ovirt3.nwfiber.com                N/A       N/A        N       N/A
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y       17853
NFS Server on ovirt1.nwfiber.com                N/A       N/A        N       N/A
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y       4771
Task Status of Volume data-hdd
------------------------------------------------------------------------------ There are no active volume tasks
Status of volume: engine Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------ Brick ovirt1.nwfiber.com:/gluster/brick1/en gine 49154 0 Y 3239 Brick ovirt2.nwfiber.com:/gluster/brick1/en gine 49154 0 Y 2982 Brick ovirt3.nwfiber.com:/gluster/brick1/en gine 49154 0 Y 2578 Self-heal Daemon on localhost N/A N/A Y 4818 Self-heal Daemon on ovirt3.nwfiber.com N/A N/A Y 17853 Self-heal Daemon on ovirt1.nwfiber.com N/A N/A Y 4771
Task Status of Volume engine
------------------------------------------------------------------------------ There are no active volume tasks
Status of volume: iso Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------ Brick ovirt1.nwfiber.com:/gluster/brick4/is o 49155 0 Y 3247 Brick ovirt2.nwfiber.com:/gluster/brick4/is o 49155 0 Y 2990 Brick ovirt3.nwfiber.com:/gluster/brick4/is o 49155 0 Y 2580 Self-heal Daemon on localhost N/A N/A Y 4818 Self-heal Daemon on ovirt3.nwfiber.com N/A N/A Y 17853 Self-heal Daemon on ovirt1.nwfiber.com N/A N/A Y 4771
Task Status of Volume iso
------------------------------------------------------------------------------ There are no active volume tasks
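A side note on reading the status output above: the data-hdd brick on 172.172.1.13 shows "N" in the Online column, i.e. its brick process is dead. Offline bricks can be spotted mechanically; here is a minimal sketch (the `offline_bricks` helper name is my own, and it assumes the standard `gluster volume status` line layout):

```shell
# List bricks reported offline ("N" in the Online column) from
# `gluster volume status` output read on stdin.
# A brick line looks like:
#   Brick host:/path    49152   0   Y   3226
# so the Online flag is the next-to-last field and the brick is field 2.
offline_bricks() {
  awk '/^Brick / && $(NF-1) == "N" { print $2 }'
}

# Typical use on a live cluster (untested sketch):
#   gluster volume status data-hdd | offline_bricks
# A dead brick process can often be restarted with
#   gluster volume start data-hdd force
# and then re-synced with `gluster volume heal data-hdd`.
```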
On Tue, May 29, 2018 at 11:30 PM, Sahina Bose <sabose@redhat.com> wrote:
On Wed, May 30, 2018 at 10:42 AM, Jim Kusznir <jim@palousetech.com> wrote:
hosted-engine --deploy failed (would not come up on my existing gluster storage). However, I realized no changes were written to my existing storage. So, I went back to trying to get my old engine running.
hosted-engine --vm-status is now taking a very long time (5+ minutes) to return, and it returns stale information everywhere. I thought perhaps the lockspace was corrupt, so I tried to clean that and the metadata, but both are failing (--clean-metadata has hung and I can't even ctrl-c out of it).
How can I reinitialize all the lockspace/metadata safely? No engine or VMs are currently running.
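For reference, the usual shape of a hosted-engine lockspace/metadata reset looks like the sketch below. This is not an authoritative procedure; verify the exact flags against the oVirt documentation for your version before running it, and note that the `run` wrapper and `DRY_RUN` guard are my own additions so the steps print instead of execute by default:

```shell
# Sketch of a hosted-engine lockspace/metadata reset.
# DRY_RUN=1 (the default) only prints each step instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run hosted-engine --set-maintenance --mode=global   # stop the HA state machine
run systemctl stop ovirt-ha-agent ovirt-ha-broker   # on every host
run hosted-engine --reinitialize-lockspace --force  # rebuild the sanlock lockspace
run hosted-engine --clean-metadata --force-clean    # wipe stale HA metadata
run systemctl start ovirt-ha-broker ovirt-ha-agent
run hosted-engine --set-maintenance --mode=none
```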
I think the first thing to make sure of is that your storage is up and running. Can you mount the gluster volumes and access the contents there? Please provide the gluster volume info and gluster volume status output for the volumes you're using.
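As a quick check of the storage side, pending self-heal counts can be totaled from `gluster volume heal <volume> info` output. A minimal sketch (the `heal_pending` helper name is my own):

```shell
# Sum the "Number of entries:" counters across all bricks from
# `gluster volume heal <volume> info` output read on stdin.
# A total of 0 means nothing is pending; a non-zero total that never
# drops suggests a stuck heal like the one described in this thread.
heal_pending() {
  awk '/^Number of entries:/ { total += $4 } END { print total + 0 }'
}

# Typical use on a live cluster (untested sketch):
#   gluster volume heal data-hdd info | heal_pending
```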
--Jim
On Tue, May 29, 2018 at 9:33 PM, Jim Kusznir <jim@palousetech.com> wrote:
Well, things went from bad to very, very bad....
It appears that during one of the 2-minute lockups, the fencing agents decided that another node in the cluster was down. As a result, 2 of the 3 nodes were simultaneously reset by fencing-agent reboots. After the nodes came back up, the engine would not start, and all running VMs (including VMs on the 3rd node, which was not rebooted) crashed.
I've now been working for about 3 hours trying to get the engine to come up. I don't know why it won't start. hosted-engine --vm-start says it's starting, but it doesn't start (virsh doesn't show any VMs running). I'm currently running --deploy, as I had run out of other options. I hope this will allow me to re-import all my existing VMs and start them back up after everything comes back up.
I do have an unverified geo-rep backup; I don't know if it is a good backup (there were several prior messages to this list, but I didn't get replies to my questions; the replication was behaving in ways I considered strange, and the data directories are larger than their source).
I'll see if my --deploy works, and if not, I'll be back with another message/help request.
When the dust settles and I'm at least minimally functional again, I really want to understand why all these technologies designed to offer redundancy conspired to reduce uptime and create failures where there weren't any otherwise. I thought that with hosted engine, 3 oVirt servers, and glusterfs at minimum replica 2+arbiter or replica 3, I should have had strong resilience against server failure or disk failure, and should have been protected from (or recovered from) data corruption. Instead, all of the above happened (once I get my cluster back up, I still have to try to recover my webserver VM, which won't boot due to XFS journal corruption created during the gluster crashes). I think a lot of these issues were rooted in the upgrade from 4.1 to 4.2.
--Jim
On Tue, May 29, 2018 at 6:25 PM, Jim Kusznir <jim@palousetech.com> wrote:
> I also finally found the following in my system log on one server: > > [10679.524491] INFO: task glusterclogro:14933 blocked for more than > 120 seconds. > [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" > disables this message. > [10679.527144] glusterclogro D ffff97209832bf40 0 14933 1 > 0x00000080 > [10679.527150] Call Trace: > [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70 > [10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 > [xfs] > [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 > [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] > [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 > [10679.527268] [<ffffffffb992076f>] ? > system_call_after_swapgs+0xbc/0x160 > [10679.527271] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 > [10679.527275] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 > [10679.527279] [<ffffffffb992077b>] ? > system_call_after_swapgs+0xc8/0x160 > [10679.527283] INFO: task glusterposixfsy:14941 blocked for more > than 120 seconds. > [10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" > disables this message. > [10679.529956] glusterposixfsy D ffff972495f84f10 0 14941 1 > 0x00000080 > [10679.529961] Call Trace: > [10679.529966] [<ffffffffb9913f79>] schedule+0x29/0x70 > [10679.530003] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 > [xfs] > [10679.530008] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 > [10679.530038] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] > [10679.530042] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 > [10679.530046] [<ffffffffb992076f>] ? > system_call_after_swapgs+0xbc/0x160 > [10679.530050] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 > [10679.530054] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 > [10679.530058] [<ffffffffb992077b>] ? > system_call_after_swapgs+0xc8/0x160 > [10679.530062] INFO: task glusteriotwr13:15486 blocked for more than > 120 seconds. 
> [10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" > disables this message. > [10679.533732] glusteriotwr13 D ffff9720a83f0000 0 15486 1 > 0x00000080 > [10679.533738] Call Trace: > [10679.533747] [<ffffffffb9913f79>] schedule+0x29/0x70 > [10679.533799] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 > [xfs] > [10679.533806] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 > [10679.533846] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] > [10679.533852] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 > [10679.533858] [<ffffffffb992076f>] ? > system_call_after_swapgs+0xbc/0x160 > [10679.533863] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 > [10679.533868] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 > [10679.533873] [<ffffffffb992077b>] ? > system_call_after_swapgs+0xc8/0x160 > [10919.512757] INFO: task glusterclogro:14933 blocked for more than > 120 seconds. > [10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" > disables this message. > [10919.516663] glusterclogro D ffff97209832bf40 0 14933 1 > 0x00000080 > [10919.516677] Call Trace: > [10919.516690] [<ffffffffb9913f79>] schedule+0x29/0x70 > [10919.516696] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 > [10919.516703] [<ffffffffb951cc04>] ? blk_finish_plug+0x14/0x40 > [10919.516768] [<ffffffffc05e9224>] ? _xfs_buf_ioapply+0x334/0x460 > [xfs] > [10919.516774] [<ffffffffb991432d>] wait_for_completion+0xfd/0x140 > [10919.516782] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 > [10919.516821] [<ffffffffc05eb0a3>] ? _xfs_buf_read+0x23/0x40 [xfs] > [10919.516859] [<ffffffffc05eafa9>] xfs_buf_submit_wait+0xf9/0x1d0 > [xfs] > [10919.516902] [<ffffffffc061b279>] ? 
> xfs_trans_read_buf_map+0x199/0x400 [xfs] > [10919.516940] [<ffffffffc05eb0a3>] _xfs_buf_read+0x23/0x40 [xfs] > [10919.516977] [<ffffffffc05eb1b9>] xfs_buf_read_map+0xf9/0x160 > [xfs] > [10919.517022] [<ffffffffc061b279>] > xfs_trans_read_buf_map+0x199/0x400 [xfs] > [10919.517057] [<ffffffffc05c8d04>] xfs_da_read_buf+0xd4/0x100 [xfs] > [10919.517091] [<ffffffffc05c8d53>] xfs_da3_node_read+0x23/0xd0 > [xfs] > [10919.517126] [<ffffffffc05c9fee>] > xfs_da3_node_lookup_int+0x6e/0x2f0 [xfs] > [10919.517160] [<ffffffffc05d5a1d>] xfs_dir2_node_lookup+0x4d/0x170 > [xfs] > [10919.517194] [<ffffffffc05ccf5d>] xfs_dir_lookup+0x1bd/0x1e0 [xfs] > [10919.517233] [<ffffffffc05fd8d9>] xfs_lookup+0x69/0x140 [xfs] > [10919.517271] [<ffffffffc05fa018>] xfs_vn_lookup+0x78/0xc0 [xfs] > [10919.517278] [<ffffffffb9425cf3>] lookup_real+0x23/0x60 > [10919.517283] [<ffffffffb9426702>] __lookup_hash+0x42/0x60 > [10919.517288] [<ffffffffb942d519>] SYSC_renameat2+0x3a9/0x5a0 > [10919.517296] [<ffffffffb94d3753>] ? > selinux_file_free_security+0x23/0x30 > [10919.517304] [<ffffffffb992077b>] ? > system_call_after_swapgs+0xc8/0x160 > [10919.517309] [<ffffffffb992076f>] ? > system_call_after_swapgs+0xbc/0x160 > [10919.517313] [<ffffffffb992077b>] ? > system_call_after_swapgs+0xc8/0x160 > [10919.517318] [<ffffffffb992076f>] ? > system_call_after_swapgs+0xbc/0x160 > [10919.517323] [<ffffffffb942e58e>] SyS_renameat2+0xe/0x10 > [10919.517328] [<ffffffffb942e5ce>] SyS_rename+0x1e/0x20 > [10919.517333] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 > [10919.517339] [<ffffffffb992077b>] ? > system_call_after_swapgs+0xc8/0x160 > [11159.496095] INFO: task glusteriotwr9:15482 blocked for more than > 120 seconds. > [11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" > disables this message. > [11159.498978] glusteriotwr9 D ffff971fa0fa1fa0 0 15482 1 > 0x00000080 > [11159.498984] Call Trace: > [11159.498995] [<ffffffffb9911f00>] ? 
bit_wait+0x50/0x50 > [11159.498999] [<ffffffffb9913f79>] schedule+0x29/0x70 > [11159.499003] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 > [11159.499056] [<ffffffffc05dd9b7>] ? > xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs] > [11159.499082] [<ffffffffc05dd43e>] ? > xfs_iext_bno_to_irec+0x8e/0xd0 [xfs] > [11159.499090] [<ffffffffb92f7a12>] ? ktime_get_ts64+0x52/0xf0 > [11159.499093] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50 > [11159.499097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 > [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20 > [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50 > [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0 > [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90 > [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40 > [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70 > [11159.499125] [<ffffffffb9394e85>] > grab_cache_page_write_begin+0x55/0xc0 > [11159.499130] [<ffffffffb9484b76>] iomap_write_begin+0x66/0x100 > [11159.499135] [<ffffffffb9484edf>] iomap_write_actor+0xcf/0x1d0 > [11159.499140] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 > [11159.499144] [<ffffffffb94854e7>] iomap_apply+0xb7/0x150 > [11159.499149] [<ffffffffb9485621>] > iomap_file_buffered_write+0xa1/0xe0 > [11159.499153] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 > [11159.499182] [<ffffffffc05f025d>] > xfs_file_buffered_aio_write+0x12d/0x2c0 [xfs] > [11159.499213] [<ffffffffc05f057d>] xfs_file_aio_write+0x18d/0x1b0 > [xfs] > [11159.499217] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 > [11159.499222] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 > [11159.499225] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 > [11159.499230] [<ffffffffb992076f>] ? > system_call_after_swapgs+0xbc/0x160 > [11159.499234] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 > [11159.499238] [<ffffffffb992077b>] ? 
> system_call_after_swapgs+0xc8/0x160 > [11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than > 120 seconds. > [11279.490197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" > disables this message. > [11279.491665] xfsaild/dm-10 D ffff9720a8660fd0 0 1134 2 > 0x00000000 > [11279.491671] Call Trace: > [11279.491682] [<ffffffffb92a3a2e>] ? > try_to_del_timer_sync+0x5e/0x90 > [11279.491688] [<ffffffffb9913f79>] schedule+0x29/0x70 > [11279.491744] [<ffffffffc060de36>] _xfs_log_force+0x1c6/0x2c0 [xfs] > [11279.491750] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 > [11279.491783] [<ffffffffc0619fec>] ? xfsaild+0x16c/0x6f0 [xfs] > [11279.491817] [<ffffffffc060df5c>] xfs_log_force+0x2c/0x70 [xfs] > [11279.491849] [<ffffffffc0619e80>] ? > xfs_trans_ail_cursor_first+0x90/0x90 [xfs] > [11279.491880] [<ffffffffc0619fec>] xfsaild+0x16c/0x6f0 [xfs] > [11279.491913] [<ffffffffc0619e80>] ? > xfs_trans_ail_cursor_first+0x90/0x90 [xfs] > [11279.491919] [<ffffffffb92bb161>] kthread+0xd1/0xe0 > [11279.491926] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 > [11279.491932] [<ffffffffb9920677>] > ret_from_fork_nospec_begin+0x21/0x21 > [11279.491936] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 > [11279.491976] INFO: task glusterclogfsyn:14934 blocked for more > than 120 seconds. > [11279.493466] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" > disables this message. > [11279.494952] glusterclogfsyn D ffff97209832af70 0 14934 1 > 0x00000080 > [11279.494957] Call Trace: > [11279.494979] [<ffffffffc0309839>] ? > __split_and_process_bio+0x2e9/0x520 [dm_mod] > [11279.494983] [<ffffffffb9913f79>] schedule+0x29/0x70 > [11279.494987] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 > [11279.494997] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 > [dm_mod] > [11279.495001] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 > [11279.495005] [<ffffffffb99145ad>] > wait_for_completion_io+0xfd/0x140 > [11279.495010] [<ffffffffb92cf1b0>] ? 
wake_up_state+0x20/0x20 > [11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 > [11279.495049] [<ffffffffc06064b9>] > xfs_blkdev_issue_flush+0x19/0x20 [xfs] > [11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] > [11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 > [11279.495090] [<ffffffffb992076f>] ? > system_call_after_swapgs+0xbc/0x160 > [11279.495094] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 > [11279.495098] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 > [11279.495102] [<ffffffffb992077b>] ? > system_call_after_swapgs+0xc8/0x160 > [11279.495105] INFO: task glusterposixfsy:14941 blocked for more > than 120 seconds. > [11279.496606] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" > disables this message. > [11279.498114] glusterposixfsy D ffff972495f84f10 0 14941 1 > 0x00000080 > [11279.498118] Call Trace: > [11279.498134] [<ffffffffc0309839>] ? > __split_and_process_bio+0x2e9/0x520 [dm_mod] > [11279.498138] [<ffffffffb9913f79>] schedule+0x29/0x70 > [11279.498142] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 > [11279.498152] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 > [dm_mod] > [11279.498156] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 > [11279.498160] [<ffffffffb99145ad>] > wait_for_completion_io+0xfd/0x140 > [11279.498165] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 > [11279.498169] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 > [11279.498202] [<ffffffffc06064b9>] > xfs_blkdev_issue_flush+0x19/0x20 [xfs] > [11279.498231] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] > [11279.498238] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 > [11279.498242] [<ffffffffb992076f>] ? > system_call_after_swapgs+0xbc/0x160 > [11279.498246] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 > [11279.498250] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 > [11279.498254] [<ffffffffb992077b>] ? 
> system_call_after_swapgs+0xc8/0x160 > [11279.498257] INFO: task glusteriotwr1:14950 blocked for more than > 120 seconds. > [11279.499789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" > disables this message. > [11279.501343] glusteriotwr1 D ffff97208b6daf70 0 14950 1 > 0x00000080 > [11279.501348] Call Trace: > [11279.501353] [<ffffffffb9913f79>] schedule+0x29/0x70 > [11279.501390] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 > [xfs] > [11279.501396] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 > [11279.501428] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] > [11279.501432] [<ffffffffb944ef3f>] generic_write_sync+0x4f/0x70 > [11279.501461] [<ffffffffc05f0545>] xfs_file_aio_write+0x155/0x1b0 > [xfs] > [11279.501466] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 > [11279.501471] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 > [11279.501475] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 > [11279.501479] [<ffffffffb992076f>] ? > system_call_after_swapgs+0xbc/0x160 > [11279.501483] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 > [11279.501489] [<ffffffffb992077b>] ? > system_call_after_swapgs+0xc8/0x160 > [11279.501493] INFO: task glusteriotwr4:14953 blocked for more than > 120 seconds. > [11279.503047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" > disables this message. > [11279.504630] glusteriotwr4 D ffff972499f2bf40 0 14953 1 > 0x00000080 > [11279.504635] Call Trace: > [11279.504640] [<ffffffffb9913f79>] schedule+0x29/0x70 > [11279.504676] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 > [xfs] > [11279.504681] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 > [11279.504710] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] > [11279.504714] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 > [11279.504718] [<ffffffffb992076f>] ? > system_call_after_swapgs+0xbc/0x160 > [11279.504722] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 > [11279.504725] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 > [11279.504730] [<ffffffffb992077b>] ? 
> system_call_after_swapgs+0xc8/0x160 > [12127.466494] perf: interrupt took too long (8263 > 8150), lowering > kernel.perf_event_max_sample_rate to 24000 > > -------------------- > I think this is the cause of the massive ovirt performance issues > irrespective of gluster volume. At the time this happened, I was also > ssh'ed into the host, and was doing some rpm querry commands. I had just > run rpm -qa |grep glusterfs (to verify what version was actually > installed), and that command took almost 2 minutes to return! Normally it > takes less than 2 seconds. That is all pure local SSD IO, too.... > > I'm no expert, but its my understanding that anytime a software > causes these kinds of issues, its a serious bug in the software, even if > its mis-handled exceptions. Is this correct? > > --Jim > > On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir <jim@palousetech.com> > wrote: > >> I think this is the profile information for one of the volumes that >> lives on the SSDs and is fully operational with no down/problem disks: >> >> [root@ovirt2 yum.repos.d]# gluster volume profile data info >> Brick: ovirt2.nwfiber.com:/gluster/brick2/data >> ---------------------------------------------- >> Cumulative Stats: >> Block Size: 256b+ 512b+ >> 1024b+ >> No. of Reads: 983 2696 >> 1059 >> No. of Writes: 0 1113 >> 302 >> >> Block Size: 2048b+ 4096b+ >> 8192b+ >> No. of Reads: 852 88608 >> 53526 >> No. of Writes: 522 812340 >> 76257 >> >> Block Size: 16384b+ 32768b+ >> 65536b+ >> No. of Reads: 54351 241901 >> 15024 >> No. of Writes: 21636 8656 >> 8976 >> >> Block Size: 131072b+ >> No. of Reads: 524156 >> No. of Writes: 296071 >> %-latency Avg-latency Min-Latency Max-Latency No. 
of >> calls Fop >> --------- ----------- ----------- ----------- >> ------------ ---- >> 0.00 0.00 us 0.00 us 0.00 us >> 4189 RELEASE >> 0.00 0.00 us 0.00 us 0.00 us >> 1257 RELEASEDIR >> 0.00 46.19 us 12.00 us 187.00 us >> 69 FLUSH >> 0.00 147.00 us 78.00 us 367.00 us 86 >> REMOVEXATTR >> 0.00 223.46 us 24.00 us 1166.00 us >> 149 READDIR >> 0.00 565.34 us 76.00 us 3639.00 us >> 88 FTRUNCATE >> 0.00 263.28 us 20.00 us 28385.00 us >> 228 LK >> 0.00 98.84 us 2.00 us 880.00 us >> 1198 OPENDIR >> 0.00 91.59 us 26.00 us 10371.00 us >> 3853 STATFS >> 0.00 494.14 us 17.00 us 193439.00 us >> 1171 GETXATTR >> 0.00 299.42 us 35.00 us 9799.00 us >> 2044 READDIRP >> 0.00 1965.31 us 110.00 us 382258.00 us >> 321 XATTROP >> 0.01 113.40 us 24.00 us 61061.00 us >> 8134 STAT >> 0.01 755.38 us 57.00 us 607603.00 us >> 3196 DISCARD >> 0.05 2690.09 us 58.00 us 2704761.00 us >> 3206 OPEN >> 0.10 119978.25 us 97.00 us 9406684.00 us >> 154 SETATTR >> 0.18 101.73 us 28.00 us 700477.00 us >> 313379 FSTAT >> 0.23 1059.84 us 25.00 us 2716124.00 us >> 38255 LOOKUP >> 0.47 1024.11 us 54.00 us 6197164.00 us >> 81455 FXATTROP >> 1.72 2984.00 us 15.00 us 37098954.00 us >> 103020 FINODELK >> 5.92 44315.32 us 51.00 us 24731536.00 us >> 23957 FSYNC >> 13.27 2399.78 us 25.00 us 22089540.00 us >> 991005 READ >> 37.00 5980.43 us 52.00 us 22099889.00 us >> 1108976 WRITE >> 41.04 5452.75 us 13.00 us 22102452.00 us >> 1349053 INODELK >> >> Duration: 10026 seconds >> Data Read: 80046027759 bytes >> Data Written: 44496632320 bytes >> >> Interval 1 Stats: >> Block Size: 256b+ 512b+ >> 1024b+ >> No. of Reads: 983 2696 >> 1059 >> No. of Writes: 0 838 >> 185 >> >> Block Size: 2048b+ 4096b+ >> 8192b+ >> No. of Reads: 852 85856 >> 51575 >> No. of Writes: 382 705802 >> 57812 >> >> Block Size: 16384b+ 32768b+ >> 65536b+ >> No. of Reads: 52673 232093 >> 14984 >> No. of Writes: 13499 4908 >> 4242 >> >> Block Size: 131072b+ >> No. of Reads: 460040 >> No. 
of Writes: 6411 >> %-latency Avg-latency Min-Latency Max-Latency No. of >> calls Fop >> --------- ----------- ----------- ----------- >> ------------ ---- >> 0.00 0.00 us 0.00 us 0.00 us >> 2093 RELEASE >> 0.00 0.00 us 0.00 us 0.00 us >> 1093 RELEASEDIR >> 0.00 53.38 us 26.00 us 111.00 us >> 16 FLUSH >> 0.00 145.14 us 78.00 us 367.00 us 71 >> REMOVEXATTR >> 0.00 190.96 us 114.00 us 298.00 us >> 71 SETATTR >> 0.00 213.38 us 24.00 us 1145.00 us >> 90 READDIR >> 0.00 263.28 us 20.00 us 28385.00 us >> 228 LK >> 0.00 101.76 us 2.00 us 880.00 us >> 1093 OPENDIR >> 0.01 93.60 us 27.00 us 10371.00 us >> 3090 STATFS >> 0.02 537.47 us 17.00 us 193439.00 us >> 1038 GETXATTR >> 0.03 297.44 us 35.00 us 9799.00 us >> 1990 READDIRP >> 0.03 2357.28 us 110.00 us 382258.00 us >> 253 XATTROP >> 0.04 385.93 us 58.00 us 47593.00 us >> 2091 OPEN >> 0.04 114.86 us 24.00 us 61061.00 us >> 7715 STAT >> 0.06 444.59 us 57.00 us 333240.00 us >> 3053 DISCARD >> 0.42 316.24 us 25.00 us 290728.00 us >> 29823 LOOKUP >> 0.73 257.92 us 54.00 us 344812.00 us >> 63296 FXATTROP >> 1.37 98.30 us 28.00 us 67621.00 us >> 313172 FSTAT >> 1.58 2124.69 us 51.00 us 849200.00 us >> 16717 FSYNC >> 5.73 162.46 us 52.00 us 748492.00 us >> 794079 WRITE >> 7.19 2065.17 us 16.00 us 37098954.00 us >> 78381 FINODELK >> 36.44 886.32 us 25.00 us 2216436.00 us >> 925421 READ >> 46.30 1178.04 us 13.00 us 1700704.00 us >> 884635 INODELK >> >> Duration: 7485 seconds >> Data Read: 71250527215 bytes >> Data Written: 5119903744 bytes >> >> Brick: ovirt3.nwfiber.com:/gluster/brick2/data >> ---------------------------------------------- >> Cumulative Stats: >> Block Size: 1b+ >> No. of Reads: 0 >> No. of Writes: 3264419 >> %-latency Avg-latency Min-Latency Max-Latency No. 
of >> calls Fop >> --------- ----------- ----------- ----------- >> ------------ ---- >> 0.00 0.00 us 0.00 us 0.00 us >> 90 FORGET >> 0.00 0.00 us 0.00 us 0.00 us >> 9462 RELEASE >> 0.00 0.00 us 0.00 us 0.00 us >> 4254 RELEASEDIR >> 0.00 50.52 us 13.00 us 190.00 us >> 71 FLUSH >> 0.00 186.97 us 87.00 us 713.00 us 86 >> REMOVEXATTR >> 0.00 79.32 us 33.00 us 189.00 us >> 228 LK >> 0.00 220.98 us 129.00 us 513.00 us >> 86 SETATTR >> 0.01 259.30 us 26.00 us 2632.00 us >> 137 READDIR >> 0.02 322.76 us 145.00 us 2125.00 us >> 321 XATTROP >> 0.03 109.55 us 2.00 us 1258.00 us >> 1193 OPENDIR >> 0.05 70.21 us 21.00 us 431.00 us >> 3196 DISCARD >> 0.05 169.26 us 21.00 us 2315.00 us >> 1545 GETXATTR >> 0.12 176.85 us 63.00 us 2844.00 us >> 3206 OPEN >> 0.61 303.49 us 90.00 us 3085.00 us >> 9633 FSTAT >> 2.44 305.66 us 28.00 us 3716.00 us >> 38230 LOOKUP >> 4.52 266.22 us 55.00 us 53424.00 us >> 81455 FXATTROP >> 6.96 1397.99 us 51.00 us 64822.00 us >> 23889 FSYNC >> 16.48 84.74 us 25.00 us 6917.00 us >> 932592 WRITE >> 30.16 106.90 us 13.00 us 3920189.00 us >> 1353046 INODELK >> 38.55 1794.52 us 14.00 us 16210553.00 us >> 103039 FINODELK >> >> Duration: 66562 seconds >> Data Read: 0 bytes >> Data Written: 3264419 bytes >> >> Interval 1 Stats: >> Block Size: 1b+ >> No. of Reads: 0 >> No. of Writes: 794080 >> %-latency Avg-latency Min-Latency Max-Latency No. 
of >> calls Fop >> --------- ----------- ----------- ----------- >> ------------ ---- >> 0.00 0.00 us 0.00 us 0.00 us >> 2093 RELEASE >> 0.00 0.00 us 0.00 us 0.00 us >> 1093 RELEASEDIR >> 0.00 70.31 us 26.00 us 125.00 us >> 16 FLUSH >> 0.00 193.10 us 103.00 us 713.00 us 71 >> REMOVEXATTR >> 0.01 227.32 us 133.00 us 513.00 us >> 71 SETATTR >> 0.01 79.32 us 33.00 us 189.00 us >> 228 LK >> 0.01 259.83 us 35.00 us 1138.00 us >> 89 READDIR >> 0.03 318.26 us 145.00 us 2047.00 us >> 253 XATTROP >> 0.04 112.67 us 3.00 us 1258.00 us >> 1093 OPENDIR >> 0.06 167.98 us 23.00 us 1951.00 us >> 1014 GETXATTR >> 0.08 70.97 us 22.00 us 431.00 us >> 3053 DISCARD >> 0.13 183.78 us 66.00 us 2844.00 us >> 2091 OPEN >> 1.01 303.82 us 90.00 us 3085.00 us >> 9610 FSTAT >> 3.27 316.59 us 30.00 us 3716.00 us >> 29820 LOOKUP >> 5.83 265.79 us 59.00 us 53424.00 us >> 63296 FXATTROP >> 7.95 1373.89 us 51.00 us 64822.00 us >> 16717 FSYNC >> 23.17 851.99 us 14.00 us 16210553.00 us >> 78555 FINODELK >> 24.04 87.44 us 27.00 us 6917.00 us >> 794081 WRITE >> 34.36 111.91 us 14.00 us 984871.00 us >> 886790 INODELK >> >> Duration: 7485 seconds >> Data Read: 0 bytes >> Data Written: 794080 bytes >> >> >> ----------------------- >> Here is the data from the volume that is backed by the SHDDs and >> has one failed disk: >> [root@ovirt2 yum.repos.d]# gluster volume profile data-hdd info >> Brick: 172.172.1.12:/gluster/brick3/data-hdd >> -------------------------------------------- >> Cumulative Stats: >> Block Size: 256b+ 512b+ >> 1024b+ >> No. of Reads: 1702 86 >> 16 >> No. of Writes: 0 767 >> 71 >> >> Block Size: 2048b+ 4096b+ >> 8192b+ >> No. of Reads: 19 51841 >> 2049 >> No. of Writes: 76 60668 >> 35727 >> >> Block Size: 16384b+ 32768b+ >> 65536b+ >> No. of Reads: 1744 639 >> 1088 >> No. of Writes: 8524 2410 >> 1285 >> >> Block Size: 131072b+ >> No. of Reads: 771999 >> No. of Writes: 29584 >> %-latency Avg-latency Min-Latency Max-Latency No. 
of >> calls Fop >> --------- ----------- ----------- ----------- >> ------------ ---- >> 0.00 0.00 us 0.00 us 0.00 us >> 2902 RELEASE >> 0.00 0.00 us 0.00 us 0.00 us >> 1517 RELEASEDIR >> 0.00 197.00 us 197.00 us 197.00 us >> 1 FTRUNCATE >> 0.00 70.24 us 16.00 us 758.00 us >> 51 FLUSH >> 0.00 143.93 us 82.00 us 305.00 us 57 >> REMOVEXATTR >> 0.00 178.63 us 105.00 us 712.00 us >> 60 SETATTR >> 0.00 67.30 us 19.00 us 572.00 us >> 555 LK >> 0.00 322.80 us 23.00 us 4673.00 us >> 138 READDIR >> 0.00 336.56 us 106.00 us 11994.00 us >> 237 XATTROP >> 0.00 84.70 us 28.00 us 1071.00 us >> 3469 STATFS >> 0.01 387.75 us 2.00 us 146017.00 us >> 1467 OPENDIR >> 0.01 148.59 us 21.00 us 64374.00 us >> 4454 STAT >> 0.02 783.02 us 16.00 us 93502.00 us >> 1902 GETXATTR >> 0.03 1516.10 us 17.00 us 210690.00 us >> 1364 ENTRYLK >> 0.03 2555.47 us 300.00 us 674454.00 us >> 1064 READDIRP >> 0.07 85.74 us 19.00 us 68340.00 us >> 62849 FSTAT >> 0.07 1978.12 us 59.00 us 202596.00 us >> 2729 OPEN >> 0.22 708.57 us 15.00 us 394799.00 us >> 25447 LOOKUP >> 5.94 2331.74 us 15.00 us 1099530.00 us >> 207534 FINODELK >> 7.31 8311.75 us 58.00 us 1800216.00 us >> 71668 FXATTROP >> 12.49 7735.19 us 51.00 us 3595513.00 us >> 131642 WRITE >> 17.70 957.08 us 16.00 us 13700466.00 us >> 1508160 INODELK >> 24.55 2546.43 us 26.00 us 5077347.00 us >> 786060 READ >> 31.56 49699.15 us 47.00 us 3746331.00 us >> 51777 FSYNC >> >> Duration: 10101 seconds >> Data Read: 101562897361 bytes >> Data Written: 4834450432 bytes >> >> Interval 0 Stats: >> Block Size: 256b+ 512b+ >> 1024b+ >> No. of Reads: 1702 86 >> 16 >> No. of Writes: 0 767 >> 71 >> >> Block Size: 2048b+ 4096b+ >> 8192b+ >> No. of Reads: 19 51841 >> 2049 >> No. of Writes: 76 60668 >> 35727 >> >> Block Size: 16384b+ 32768b+ >> 65536b+ >> No. of Reads: 1744 639 >> 1088 >> No. of Writes: 8524 2410 >> 1285 >> >> Block Size: 131072b+ >> No. of Reads: 771999 >> No. of Writes: 29584 >> %-latency Avg-latency Min-Latency Max-Latency No. 
of >> calls Fop >> --------- ----------- ----------- ----------- >> ------------ ---- >> 0.00 0.00 us 0.00 us 0.00 us >> 2902 RELEASE >> 0.00 0.00 us 0.00 us 0.00 us >> 1517 RELEASEDIR >> 0.00 197.00 us 197.00 us 197.00 us >> 1 FTRUNCATE >> 0.00 70.24 us 16.00 us 758.00 us >> 51 FLUSH >> 0.00 143.93 us 82.00 us 305.00 us 57 >> REMOVEXATTR >> 0.00 178.63 us 105.00 us 712.00 us >> 60 SETATTR >> 0.00 67.30 us 19.00 us 572.00 us >> 555 LK >> 0.00 322.80 us 23.00 us 4673.00 us >> 138 READDIR >> 0.00 336.56 us 106.00 us 11994.00 us >> 237 XATTROP >> 0.00 84.70 us 28.00 us 1071.00 us >> 3469 STATFS >> 0.01 387.75 us 2.00 us 146017.00 us >> 1467 OPENDIR >> 0.01 148.59 us 21.00 us 64374.00 us >> 4454 STAT >> 0.02 783.02 us 16.00 us 93502.00 us >> 1902 GETXATTR >> 0.03 1516.10 us 17.00 us 210690.00 us >> 1364 ENTRYLK >> 0.03 2555.47 us 300.00 us 674454.00 us >> 1064 READDIRP >> 0.07 85.73 us 19.00 us 68340.00 us >> 62849 FSTAT >> 0.07 1978.12 us 59.00 us 202596.00 us >> 2729 OPEN >> 0.22 708.57 us 15.00 us 394799.00 us >> 25447 LOOKUP >> 5.94 2334.57 us 15.00 us 1099530.00 us >> 207534 FINODELK >> 7.31 8311.49 us 58.00 us 1800216.00 us >> 71668 FXATTROP >> 12.49 7735.32 us 51.00 us 3595513.00 us >> 131642 WRITE >> 17.71 957.08 us 16.00 us 13700466.00 us >> 1508160 INODELK >> 24.56 2546.42 us 26.00 us 5077347.00 us >> 786060 READ >> 31.54 49651.63 us 47.00 us 3746331.00 us >> 51777 FSYNC >> >> Duration: 10101 seconds >> Data Read: 101562897361 bytes >> Data Written: 4834450432 bytes >> >> >> On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir <jim@palousetech.com> >> wrote: >> >>> Thank you for your response. >>> >>> I have 4 gluster volumes. 
3 are re
>>> _______________________________________________
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-leave@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/B4U3ZMQT5ULU34...
>>

Thank you. I called it a sufficient win to go to bed when my production VoIP server was functional again, although I'm getting periodic warnings that it is unresponsive due to the gluster issues still plaguing it. I'm pretty sure the gluster issues I now experience are due to the upgrade from 3.8 (or was it 3.9) to 3.12 as included in oVirt. I too have been quite concerned about gluster, as it seems the majority of the major issues I experience are gluster-based. I still have about 90% of my infrastructure down, and I still only have one working oVirt engine/host.

On Wed, May 30, 2018 at 5:09 AM, Jayme <jaymef@gmail.com> wrote:
As someone with hardware on the way to build a 3-node HCI cluster with only two 2TB SSDs per node, this problem concerns me. I sometimes worry about the complexity glusterfs adds to the mix and how difficult it can be to manage when things go wrong.
I've been following this thread and feel for you. You must be getting tired; I've been there, done that, and don't wish it upon anyone. I hope you get your environment back in working order soon and at some point discover what went wrong to cause this problem.
On Wed, May 30, 2018, 8:55 AM Jim Kusznir, <jim@palousetech.com> wrote:
So, I appear to be at the point where gluster is functional, but the engine still won't start. Here's an excerpt from agent.log on one machine after giving the hosted-engine --vm-start command:
MainThread::INFO::2018-05-29 23:52:09,953::hosted_engine::948::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) Engine VM started on localhost
MainThread::INFO::2018-05-29 23:52:10,230::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStart-EngineStarting) sent? sent
MainThread::INFO::2018-05-29 23:56:16,061::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400)
MainThread::INFO::2018-05-29 23:56:16,075::states::779::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) VM is unexpectedly down.
MainThread::INFO::2018-05-29 23:56:16,076::state_decorators::92::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Timeout cleared while transitioning <class 'ovirt_hosted_engine_ha.agent.states.EngineStarting'> -> <class 'ovirt_hosted_engine_ha.agent.states.EngineMaybeAway'>
MainThread::INFO::2018-05-29 23:56:16,359::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStarting-EngineMaybeAway) sent? sent
MainThread::INFO::2018-05-29 23:56:16,554::ovf_store::151::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8-49e73e6ba929

---------------
Here's an excerpt from vdsm.log:

2018-05-30 00:00:23,114-0700 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call VM.create succeeded in 0.02 seconds (__init__:573)
2018-05-30 00:00:23,115-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') VM wrapper has started (vm:2764)
2018-05-30 00:00:23,125-0700 INFO (vm/50392390) [vdsm.api] START getVolumeSize(sdUUID='c0acdefb-7d16-48ec-9d76-659b8fe33e2a', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='3de00d8c-d8b0-4ae0-9363-38a504f5d2b2', volUUID='d0fcd8de-6105-4f33-a674-727e3a11e89f', options=None) from=internal, task_id=c6d74ce9-1944-457a-9deb-d0a6929f39ce (api:46)
2018-05-30 00:00:23,130-0700 INFO (vm/50392390) [vdsm.api] FINISH getVolumeSize return={'truesize': '11298571776', 'apparentsize': '10737418240'} from=internal, task_id=c6d74ce9-1944-457a-9deb-d0a6929f39ce (api:52)
2018-05-30 00:00:23,131-0700 INFO (vm/50392390) [vds] prepared volume path: (clientIF:497)
2018-05-30 00:00:23,131-0700 INFO (vm/50392390) [vdsm.api] START prepareImage(sdUUID='c0acdefb-7d16-48ec-9d76-659b8fe33e2a', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='3de00d8c-d8b0-4ae0-9363-38a504f5d2b2', leafUUID='d0fcd8de-6105-4f33-a674-727e3a11e89f', allowIllegal=False) from=internal, task_id=d2797364-270f-4800-92b0-d114441194a3 (api:46)
2018-05-30 00:00:23,155-0700 INFO (vm/50392390) [storage.StorageDomain] Fixing permissions on /rhev/data-center/mnt/glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f (fileSD:622)
2018-05-30 00:00:23,156-0700 INFO (vm/50392390) [storage.StorageDomain] Creating domain run directory u'/var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a' (fileSD:576)
2018-05-30 00:00:23,156-0700 INFO (vm/50392390) [storage.fileUtils] Creating directory: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a mode: None (fileUtils:197)
2018-05-30 00:00:23,157-0700 INFO (vm/50392390) [storage.StorageDomain] Creating symlink from /rhev/data-center/mnt/glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2 to /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2 (fileSD:579)
2018-05-30 00:00:23,228-0700 INFO (jsonrpc/0) [root] /usr/libexec/vdsm/hooks/after_get_caps/openstacknet_utils.py: rc=0 err= (hooks:110)
2018-05-30 00:00:23,418-0700 INFO (vm/50392390) [vdsm.api] FINISH prepareImage return={'info': {'path': u'engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f', 'type': 'network', 'hosts': [{'port': '0', 'transport': 'tcp', 'name': '192.168.8.11'}, {'port': '0', 'transport': 'tcp', 'name': 'ovirt1.nwfiber.com'}, {'port': '0', 'transport': 'tcp', 'name': 'ovirt2.nwfiber.com'}, {'port': '0', 'transport': 'tcp', 'name': 'ovirt3.nwfiber.com'}], 'protocol': 'gluster'}, 'path': u'/var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f', 'imgVolumesInfo': [{'domainID': 'c0acdefb-7d16-48ec-9d76-659b8fe33e2a', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f', 'volumeID': u'd0fcd8de-6105-4f33-a674-727e3a11e89f', 'leasePath': u'/rhev/data-center/mnt/glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f.lease', 'imageID':
'3de00d8c-d8b0-4ae0-9363-38a504f5d2b2'}]} from=internal, task_id=d2797364-270f-4800-92b0-d114441194a3 (api:52) 2018-05-30 00:00:23,419-0700 INFO (vm/50392390) [vds] prepared volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76- 659b8fe33e2a/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/ d0fcd8de-6105-4f33-a674-727e3a11e89f (clientIF:497) 2018-05-30 00:00:23,420-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') Enabling drive monitoring (drivemonitor:54) 2018-05-30 00:00:23,434-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') drive 'hdc' path: 'file=' -> '*file=' (storagexml:323) 2018-05-30 00:00:23,435-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') drive 'vda' path: 'file=/rhev/data-center/00000000-0000-0000-0000- 000000000000/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/ 3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f' -> u'*file=/var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76- 659b8fe33e2a/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/ d0fcd8de-6105-4f33-a674-727e3a11e89f' (storagexml:323) 2018-05-30 00:00:23,435-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') Using drive lease: {'domainID': 'c0acdefb-7d16-48ec-9d76-659b8fe33e2a', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/192.168.8.11:_ engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/ 3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f', 'volumeID': u'd0fcd8de-6105-4f33-a674-727e3a11e89f', 'leasePath': u'/rhev/data-center/mnt/glusterSD/192.168.8.11:_ engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/ 3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f.lease', 'imageID': '3de00d8c-d8b0-4ae0-9363-38a504f5d2b2'} (lease:327) 2018-05-30 00:00:23,436-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') lease c0acdefb-7d16-48ec-9d76-659b8fe33e2a:d0fcd8de-6105-4f33-a674-727e3a11e89f path: 
'LEASE-PATH:d0fcd8de-6105-4f33-a674-727e3a11e89f: c0acdefb-7d16-48ec-9d76-659b8fe33e2a' -> u'/rhev/data-center/mnt/ glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec- 9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363- 38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f.lease' (lease:316) 2018-05-30 00:00:23,436-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') lease c0acdefb-7d16-48ec-9d76-659b8fe33e2a:d0fcd8de-6105-4f33-a674-727e3a11e89f offset: 'LEASE-OFFSET:d0fcd8de-6105-4f33-a674-727e3a11e89f: c0acdefb-7d16-48ec-9d76-659b8fe33e2a' -> 0 (lease:316) 2018-05-30 00:00:23,580-0700 INFO (jsonrpc/0) [root] /usr/libexec/vdsm/hooks/after_get_caps/ovirt_provider_ovn_hook: rc=0 err=Failed to get Open VSwitch system-id . err = ['ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)'] (hooks:110) 2018-05-30 00:00:23,582-0700 INFO (jsonrpc/0) [api.host] FINISH getCapabilities return={'status': {'message': 'Done', 'code': 0}, 'info': {u'HBAInventory': {u'iSCSI': [{u'InitiatorName': u'iqn.1994-05.com.redhat:1283ce7a3171'}], u'FC': []}, u'packages2': {u'kernel': {u'release': u'862.3.2.el7.x86_64', u'version': u'3.10.0'}, u'glusterfs-rdma': {u'release': u'1.el7', u'version': u'3.12.9'}, u'glusterfs-fuse': {u'release': u'1.el7', u'version': u'3.12.9'}, u'spice-server': {u'release': u'2.el7_5.3', u'version': u'0.14.0'}, u'librbd1': {u'release': u'2.el7', u'version': u'0.94.5'}, u'vdsm': {u'release': u'1.el7.centos', u'version': u'4.20.27.1'}, u'qemu-kvm': {u'release': u'21.el7_5.3.1', u'version': u'2.10.0'}, u'openvswitch': {u'release': u'3.el7', u'version': u'2.9.0'}, u'libvirt': {u'release': u'14.el7_5.5', u'version': u'3.9.0'}, u'ovirt-hosted-engine-ha': {u'release': u'1.el7.centos', u'version': u'2.2.11'}, u'qemu-img': {u'release': u'21.el7_5.3.1', u'version': u'2.10.0'}, u'mom': {u'release': u'1.el7.centos', u'version': u'0.5.12'}, u'glusterfs': {u'release': u'1.el7', u'version': u'3.12.9'}, 
u'glusterfs-cli': {u'release': u'1.el7', u'version': u'3.12.9'}, u'glusterfs-server': {u'release': u'1.el7', u'version': u'3.12.9'}, u'glusterfs-geo-replication': {u'release': u'1.el7', u'version': u'3.12.9'}}, u'numaNodeDistance': {u'1': [20, 10], u'0': [10, 20]}, u'cpuModel': u'Intel(R) Xeon(R) CPU E5540 @ 2.53GHz', u'nestedVirtualization': False, u'liveMerge': u'true', u'hooks': {u'before_vm_start': {u'50_hostedengine': {u'md5': u' 95c810cdcfe4195302a59574a5148289'}, u'50_vhostmd': {u'md5': u' 9206bc390bcbf208b06a8e899581be2d'}, u'50_vfio_mdev': {u'md5': u' b77eaad7b8f5ca3825dca1b38b0dd446'}}, u'after_network_setup': {u'30_ethtool_options': {u'md5': u'f04c2ca5dce40663e2ed69806eea917c'}}, u'after_vm_destroy': {u'50_vhostmd': {u'md5': u' d70f7ee0453f632e87c09a157ff8ff66'}, u'50_vfio_mdev': {u'md5': u' 35364c6e08ecc58410e31af6c73b17f6'}}, u'after_vm_start': {u'openstacknet_utils.py': {u'md5': u'640ef8b473cbc16b102c5ebf57ea533f'}, u'50_openstacknet': {u'md5': u'ea0a5a715da8c1badbcda28e8b8fa00e'}}, u'after_device_migrate_destination': {u'openstacknet_utils.py': {u'md5': u'640ef8b473cbc16b102c5ebf57ea533f'}, u'50_openstacknet': {u'md5': u' f93c12af7454bbba4c1ef445c2bc9860'}}, u'before_network_setup': {u'50_fcoe': {u'md5': u'28c352339c8beef1e1b05c67d106d062'}}, u'after_get_caps': {u'openstacknet_utils.py': {u'md5': u' 640ef8b473cbc16b102c5ebf57ea533f'}, u'50_openstacknet': {u'md5': u' 0e00c63ab44a952e722209ea31fd7a71'}, u'ovirt_provider_ovn_hook': {u'md5': u'257990644ee6c7824b522e2ab3077ae7'}}, u'after_nic_hotplug': {u'openstacknet_utils.py': {u'md5': u'640ef8b473cbc16b102c5ebf57ea533f'}, u'50_openstacknet': {u'md5': u'f93c12af7454bbba4c1ef445c2bc9860'}}, u'before_vm_migrate_destination': {u'50_vhostmd': {u'md5': u' 9206bc390bcbf208b06a8e899581be2d'}}, u'after_device_create': {u'openstacknet_utils.py': {u'md5': u'640ef8b473cbc16b102c5ebf57ea533f'}, u'50_openstacknet': {u'md5': u'f93c12af7454bbba4c1ef445c2bc9860'}}, u'before_vm_dehibernate': {u'50_vhostmd': {u'md5': u' 
9206bc390bcbf208b06a8e899581be2d'}}, u'before_nic_hotplug': {u'50_vmfex': {u'md5': u'49caba1a5faadd8efacef966f79bc30a'}, u'openstacknet_utils.py': {u'md5': u'640ef8b473cbc16b102c5ebf57ea533f'}, u'50_openstacknet': {u'md5': u'0438de5ff9b6bf8d3160804ff71bf827'}, u'ovirt_provider_ovn_hook': {u'md5': u'ada7e4e757fd0241b682b5cc6a545fc8'}}, u'before_device_migrate_destination': {u'50_vmfex': {u'md5': u' 49caba1a5faadd8efacef966f79bc30a'}}, u'before_device_create': {u'50_vmfex': {u'md5': u'49caba1a5faadd8efacef966f79bc30a'}, u'openstacknet_utils.py': {u'md5': u'640ef8b473cbc16b102c5ebf57ea533f'}, u'50_openstacknet': {u'md5': u'0438de5ff9b6bf8d3160804ff71bf827'}, u'ovirt_provider_ovn_hook': {u'md5': u'ada7e4e757fd0241b682b5cc6a545fc8'}}}, u'supportsIPv6': True, u'realtimeKernel': False, u'vmTypes': [u'kvm'], u'liveSnapshot': u'true', u'cpuThreads': u'16', u'kdumpStatus': 0, u'networks': {u'ovirtmgmt': {u'iface': u'ovirtmgmt', u'ipv6autoconf': False, u'addr': u'192.168.8.12', u'dhcpv6': False, u'ipv6addrs': [], u'switch': u'legacy', u'bridged': True, u'mtu': u'1500', u'dhcpv4': True, u'netmask': u'255.255.255.0', u'ipv4defaultroute': True, u'stp': u'off', u'ipv4addrs': [u'192.168.8.12/24'], u'ipv6gateway': u'::', u'gateway': u'192.168.8.1', u'ports': [u'em1']}, u'Public_Fiber': {u'iface': u'Public_Fiber', u'ipv6autoconf': False, u'addr': u'', u'dhcpv6': False, u'ipv6addrs': [], u'switch': u'legacy', u'bridged': True, u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'', u'ipv4defaultroute': False, u'stp': u'off', u'ipv4addrs': [], u'ipv6gateway': u'::', u'gateway': u'', u'ports': [u'em1.3']}, u'Gluster': {u'iface': u'em4', u'ipv6autoconf': False, u'addr': u'172.172.1.12', u'dhcpv6': False, u'ipv6addrs': [], u'switch': u'legacy', u'bridged': False, u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'255.255.255.0', u'ipv4defaultroute': False, u'ipv4addrs': [u' 172.172.1.12/24'], u'interface': u'em4', u'ipv6gateway': u'::', u'gateway': u''}}, u'kernelArgs': 
u'BOOT_IMAGE=/vmlinuz-3.10.0-862.3.2.el7.x86_64 root=/dev/mapper/centos_ovirt-root ro crashkernel=auto rd.lvm.lv=centos_ovirt/root rd.lvm.lv=centos_ovirt/swap rhgb quiet LANG=en_US.UTF-8', u'bridges': {u'Public_Fiber': {u'ipv6autoconf': False, u'addr': u'', u'dhcpv6': False, u'ipv6addrs': [], u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'', u'ipv4defaultroute': False, u'stp': u'off', u'ipv4addrs': [], u'ipv6gateway': u'::', u'gateway': u'', u'opts': {u'multicast_last_member_count': u'2', u'vlan_protocol': u'0x8100', u'hash_elasticity': u'4', u'multicast_query_response_interval': u'1000', u'group_fwd_mask': u'0x0', u'multicast_snooping': u'1', u'multicast_startup_query_interval': u'3125', u'hello_timer': u'0', u'multicast_querier_interval': u'25500', u'max_age': u'2000', u'hash_max': u'512', u'stp_state': u'0', u'topology_change_detected': u'0', u'priority': u'32768', u'multicast_igmp_version': u'2', u'multicast_membership_interval': u'26000', u'root_path_cost': u'0', u'root_port': u'0', u'multicast_stats_enabled': u'0', u'multicast_startup_query_count': u'2', u'nf_call_iptables': u'0', u'vlan_stats_enabled': u'0', u'hello_time': u'200', u'topology_change': u'0', u'bridge_id': u'8000.00219b900710', u'topology_change_timer': u'0', u'ageing_time': u'30000', u'nf_call_ip6tables': u'0', u'multicast_mld_version': u'1', u'gc_timer': u'25509', u'root_id': u'8000.00219b900710', u'nf_call_arptables': u'0', u'group_addr': u'1:80:c2:0:0:0', u'multicast_last_member_interval': u'100', u'default_pvid': u'1', u'multicast_query_interval': u'12500', u'multicast_query_use_ifaddr': u'0', u'tcn_timer': u'0', u'multicast_router': u'1', u'vlan_filtering': u'0', u'multicast_querier': u'0', u'forward_delay': u'0'}, u'ports': [u'em1.3']}, u'ovirtmgmt': {u'ipv6autoconf': False, u'addr': u'192.168.8.12', u'dhcpv6': False, u'ipv6addrs': [], u'mtu': u'1500', u'dhcpv4': True, u'netmask': u'255.255.255.0', u'ipv4defaultroute': True, u'stp': u'off', u'ipv4addrs': [u'192.168.8.12/24'], 
u'ipv6gateway': u'::', u'gateway': u'192.168.8.1', u'opts': {u'multicast_last_member_count': u'2', u'vlan_protocol': u'0x8100', u'hash_elasticity': u'4', u'multicast_query_response_interval': u'1000', u'group_fwd_mask': u'0x0', u'multicast_snooping': u'1', u'multicast_startup_query_interval': u'3125', u'hello_timer': u'0', u'multicast_querier_interval': u'25500', u'max_age': u'2000', u'hash_max': u'512', u'stp_state': u'0', u'topology_change_detected': u'0', u'priority': u'32768', u'multicast_igmp_version': u'2', u'multicast_membership_interval': u'26000', u'root_path_cost': u'0', u'root_port': u'0', u'multicast_stats_enabled': u'0', u'multicast_startup_query_count': u'2', u'nf_call_iptables': u'0', u'vlan_stats_enabled': u'0', u'hello_time': u'200', u'topology_change': u'0', u'bridge_id': u'8000.00219b900710', u'topology_change_timer': u'0', u'ageing_time': u'30000', u'nf_call_ip6tables': u'0', u'multicast_mld_version': u'1', u'gc_timer': u'13221', u'root_id': u'8000.00219b900710', u'nf_call_arptables': u'0', u'group_addr': u'1:80:c2:0:0:0', u'multicast_last_member_interval': u'100', u'default_pvid': u'1', u'multicast_query_interval': u'12500', u'multicast_query_use_ifaddr': u'0', u'tcn_timer': u'0', u'multicast_router': u'1', u'vlan_filtering': u'0', u'multicast_querier': u'0', u'forward_delay': u'0'}, u'ports': [u'em1']}}, u'uuid': u'4C4C4544-005A-4710-8034-B2C04F4C4B31', u'onlineCpus': u'0,2,4,6,8,10,12,14,1,3,5,7,9,11,13,15', u'nameservers': [u'192.168.8.1', u'8.8.8.8'], u'nics': {u'em4': {u'ipv6autoconf': False, u'addr': u'172.172.1.12', u'speed': 1000, u'dhcpv6': False, u'ipv6addrs': [], u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'255.255.255.0', u'ipv4defaultroute': False, u'ipv4addrs': [u'172.172.1.12/24'], u'hwaddr': u'00:21:9b:90:07:16', u'ipv6gateway': u'::', u'gateway': u''}, u'em1': {u'ipv6autoconf': False, u'addr': u'', u'speed': 1000, u'dhcpv6': False, u'ipv6addrs': [], u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'', u'ipv4defaultroute': 
False, u'ipv4addrs': [], u'hwaddr': u'00:21:9b:90:07:10', u'ipv6gateway': u'::', u'gateway': u''}, u'em3': {u'ipv6autoconf': True, u'addr': u'', u'speed': 0, u'dhcpv6': False, u'ipv6addrs': [], u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'', u'ipv4defaultroute': False, u'ipv4addrs': [], u'hwaddr': u'00:21:9b:90:07:14', u'ipv6gateway': u'::', u'gateway': u''}, u'em2': {u'ipv6autoconf': True, u'addr': u'', u'speed': 0, u'dhcpv6': False, u'ipv6addrs': [], u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'', u'ipv4defaultroute': False, u'ipv4addrs': [], u'hwaddr': u'00:21:9b:90:07:12', u'ipv6gateway': u'::', u'gateway': u''}}, u'software_revision': u'1', u'hostdevPassthrough': u'false', u'clusterLevels': [u'3.6', u'4.0', u'4.1', u'4.2'], u'cpuFlags': u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca, cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht, tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon, pebs,bts,rep_good,nopl,xtopology,nonstop_tsc, aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16, xtpr,pdcm,dca,sse4_1,sse4_2,popcnt,lahf_lm,tpr_shadow, vnmi,flexpriority,ept,vpid,dtherm,ida,model_Nehalem, model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_n270', u'kernelFeatures': {u'RETP': 1, u'IBRS': 0, u'PTI': 1}, u'ISCSIInitiatorName': u'iqn.1994-05.com.redhat:1283ce7a3171', u'netConfigDirty': u'False', u'selinux': {u'mode': u'1'}, u'autoNumaBalancing': 1, u'reservedMem': u'321', u'containers': False, u'bondings': {}, u'software_version': u'4.20', u'supportedENGINEs': [u'3.6', u'4.0', u'4.1', u'4.2'], u'cpuSpeed': u'2527.017', u'numaNodes': {u'1': {u'totalMemory': u'16371', u'cpus': [1, 3, 5, 7, 9, 11, 13, 15]}, u'0': {u'totalMemory': u'16384', u'cpus': [0, 2, 4, 6, 8, 10, 12, 14]}}, u'cpuSockets': u'2', u'vlans': {u'em1.3': {u'iface': u'em1', u'ipv6autoconf': False, u'addr': u'', u'dhcpv6': False, u'ipv6addrs': [], u'vlanid': 3, u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'', u'ipv4defaultroute': False, u'ipv4addrs': [], 
u'ipv6gateway': u'::', u'gateway': u''}}, u'version_name': u'Snow Man', 'lastClientIface': 'lo', u'cpuCores': u'8', u'hostedEngineDeployed': True, u'hugepages': [2048], u'guestOverhead': u'65', u'additionalFeatures': [u'libgfapi_supported', u'GLUSTER_SNAPSHOT', u'GLUSTER_GEO_REPLICATION', u'GLUSTER_BRICK_MANAGEMENT'], u'kvmEnabled': u'true', u'memSize': u'31996', u'emulatedMachines': [u'pc-i440fx-rhel7.1.0', u'pc-q35-rhel7.3.0', u'rhel6.3.0', u'pc-i440fx-rhel7.5.0', u'pc-i440fx-rhel7.0.0', u'rhel6.1.0', u'pc-i440fx-rhel7.4.0', u'rhel6.6.0', u'pc-q35-rhel7.5.0', u'rhel6.2.0', u'pc', u'pc-i440fx-rhel7.3.0', u'q35', u'pc-i440fx-rhel7.2.0', u'rhel6.4.0', u'pc-q35-rhel7.4.0', u'rhel6.0.0', u'rhel6.5.0'], u'rngSources': [u'hwrng', u'random'], u'operatingSystem': {u'release': u'5.1804.el7.centos.2', u'pretty_name': u'CentOS Linux 7 (Core)', u'version': u'7', u'name': u'RHEL'}}} from=::1,35660 (api:52) 2018-05-30 00:00:23,593-0700 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getCapabilities succeeded in 3.26 seconds (__init__:573) 2018-05-30 00:00:23,724-0700 INFO (vm/50392390) [root] /usr/libexec/vdsm/hooks/before_device_create/50_openstacknet: rc=0 err= (hooks:110) 2018-05-30 00:00:23,919-0700 INFO (vm/50392390) [root] /usr/libexec/vdsm/hooks/before_device_create/50_vmfex: rc=0 err= (hooks:110) 2018-05-30 00:00:24,213-0700 INFO (vm/50392390) [root] /usr/libexec/vdsm/hooks/before_device_create/openstacknet_utils.py: rc=0 err= (hooks:110) 2018-05-30 00:00:24,392-0700 INFO (vm/50392390) [root] /usr/libexec/vdsm/hooks/before_device_create/ovirt_provider_ovn_hook: rc=0 err= (hooks:110) 2018-05-30 00:00:24,401-0700 INFO (jsonrpc/1) [api.host] START getAllVmStats() from=::1,33766 (api:46) 2018-05-30 00:00:24,403-0700 INFO (jsonrpc/1) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::1,33766 (api:52) 2018-05-30 00:00:24,403-0700 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats 
succeeded in 0.01 seconds (__init__:573) 2018-05-30 00:00:24,623-0700 INFO (vm/50392390) [root] /usr/libexec/vdsm/hooks/before_vm_start/50_hostedengine: rc=0 err= (hooks:110) 2018-05-30 00:00:24,836-0700 INFO (vm/50392390) [root] /usr/libexec/vdsm/hooks/before_vm_start/50_vfio_mdev: rc=0 err= (hooks:110) 2018-05-30 00:00:25,022-0700 INFO (vm/50392390) [root] /usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd: rc=0 err= (hooks:110) 2018-05-30 00:00:25,023-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') <?xml version="1.0" encoding="utf-8"?><domain type="kvm" xmlns:ns0="http://ovirt.org/ vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0"> <name>HostedEngine</name> <uuid>50392390-4f89-435c-bf3b-8254b58a4ef7</uuid> <memory>8388608</memory> <currentMemory>8388608</currentMemory> <maxMemory slots="16">33554432</maxMemory> <vcpu current="4">16</vcpu> <sysinfo type="smbios"> <system> <entry name="manufacturer">oVirt</entry> <entry name="product">oVirt Node</entry> <entry name="version">7-5.1804.el7.centos.2</entry> <entry name="serial">4C4C4544-005A- 4710-8034-B2C04F4C4B31</entry> <entry name="uuid">50392390-4f89- 435c-bf3b-8254b58a4ef7</entry> </system> </sysinfo> <clock adjustment="0" offset="variable"> <timer name="rtc" tickpolicy="catchup"/> <timer name="pit" tickpolicy="delay"/> <timer name="hpet" present="no"/> </clock> <features> <acpi/> </features> <cpu match="exact"> <model>Nehalem</model> <topology cores="1" sockets="16" threads="1"/> <numa> <cell cpus="0,1,2,3" id="0" memory="8388608"/> </numa> </cpu> <cputune/> <devices> <input bus="usb" type="tablet"/> <channel type="unix"> <target name="ovirt-guest-agent.0" type="virtio"/> <source mode="bind" path="/var/lib/libvirt/qemu/ channels/50392390-4f89-435c-bf3b-8254b58a4ef7.ovirt-guest-agent.0"/> </channel> <channel type="unix"> <target name="org.qemu.guest_agent.0" type="virtio"/> <source mode="bind" path="/var/lib/libvirt/qemu/ 
channels/50392390-4f89-435c-bf3b-8254b58a4ef7.org.qemu.guest_agent.0"/> </channel> <controller index="0" ports="16" type="virtio-serial"> <address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci"/> </controller> <rng model="virtio"> <backend model="random">/dev/urandom</backend> </rng> <graphics autoport="yes" keymap="en-us" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" type="vnc"> <listen network="vdsm-ovirtmgmt" type="network"/> </graphics> <controller type="usb"> <address bus="0x00" domain="0x0000" function="0x2" slot="0x01" type="pci"/> </controller> <controller type="ide"> <address bus="0x00" domain="0x0000" function="0x1" slot="0x01" type="pci"/> </controller> <video> <model heads="1" type="cirrus" vram="16384"/> <address bus="0x00" domain="0x0000" function="0x0" slot="0x05" type="pci"/> </video> <memballoon model="none"/> <disk device="cdrom" snapshot="no" type="file"> <driver error_policy="report" name="qemu" type="raw"/> <source file="" startupPolicy="optional"/> <target bus="ide" dev="hdc"/> <readonly/> <address bus="1" controller="0" target="0" type="drive" unit="0"/> </disk> <disk device="disk" snapshot="no" type="file"> <target bus="virtio" dev="vda"/> <source file="/var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76- 659b8fe33e2a/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/ d0fcd8de-6105-4f33-a674-727e3a11e89f"/> <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/> <alias name="ua-3de00d8c-d8b0-4ae0-9363-38a504f5d2b2"/> <address bus="0x00" domain="0x0000" function="0x0" slot="0x04" type="pci"/> <serial>3de00d8c-d8b0-4ae0-9363-38a504f5d2b2</serial> </disk> <lease> <key>d0fcd8de-6105-4f33-a674-727e3a11e89f</key> <lockspace>c0acdefb-7d16-48ec-9d76-659b8fe33e2a</lockspace> <target offset="0" path="/rhev/data-center/mnt/ glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec- 9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363- 38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f.lease"/> </lease> <interface type="bridge"> 
<model type="virtio"/> <link state="up"/> <source bridge="ovirtmgmt"/> <alias name="ua-9f93c126-cbb3-4c5b-909e-41a53c2e31fb"/> <address bus="0x00" domain="0x0000" function="0x0" slot="0x02" type="pci"/> <mac address="00:16:3e:2d:bd:b1"/> <filterref filter="vdsm-no-mac-spoofing"/> <bandwidth/> </interface> <channel type="unix"><target name="org.ovirt.hosted-engine-setup.0" type="virtio"/><source mode="bind" path="/var/lib/libvirt/qemu/ channels/50392390-4f89-435c-bf3b-8254b58a4ef7.org.ovirt. hosted-engine-setup.0"/></channel></devices> <pm> <suspend-to-disk enabled="no"/> <suspend-to-mem enabled="no"/> </pm> <os> <type arch="x86_64" machine="pc-i440fx-rhel7.3.0">hvm</type> <smbios mode="sysinfo"/> </os> <metadata> <ns0:qos/> <ovirt-vm:vm> <minGuaranteedMemoryMb type="int">8192</ minGuaranteedMemoryMb> <clusterVersion>4.2</clusterVersion> <ovirt-vm:custom/> <ovirt-vm:device mac_address="00:16:3e:2d:bd:b1"> <ovirt-vm:custom/> </ovirt-vm:device> <ovirt-vm:device devtype="disk" name="vda"> <ovirt-vm:poolID>00000000-0000-0000-0000-000000000000</ ovirt-vm:poolID> <ovirt-vm:volumeID>d0fcd8de-6105-4f33-a674-727e3a11e89f</ ovirt-vm:volumeID> <ovirt-vm:shared>exclusive</ovirt-vm:shared> <ovirt-vm:imageID>3de00d8c-d8b0-4ae0-9363-38a504f5d2b2</ ovirt-vm:imageID> <ovirt-vm:domainID>c0acdefb-7d16-48ec-9d76-659b8fe33e2a</ ovirt-vm:domainID> </ovirt-vm:device> <launchPaused>false</launchPaused> <resumeBehavior>auto_resume</resumeBehavior> </ovirt-vm:vm> </metadata> <on_poweroff>destroy</on_poweroff><on_reboot>destroy</ on_reboot><on_crash>destroy</on_crash></domain> (vm:2867) 2018-05-30 00:00:25,252-0700 INFO (monitor/c0acdef) [storage.SANLock] Acquiring host id for domain c0acdefb-7d16-48ec-9d76-659b8fe33e2a (id=2, async=True) (clusterlock:284) 2018-05-30 00:00:26,808-0700 ERROR (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') The vm start process failed (vm:943) Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 
872, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2872, in _run
    dom.createWithFlags(flags)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1099, in createWithFlags
    if ret == -1: raise libvirtError('virDomainCreateWithFlags() failed', dom=self)
libvirtError: Failed to acquire lock: No space left on device
2018-05-30 00:00:26,809-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') Changed state to Down: Failed to acquire lock: No space left on device (code=1) (vm:1683)
-------------
The "Failed to acquire lock: No space left on device" error has me concerned... yet no device is full on any of the servers (especially the one I'm on):
Filesystem                      Size  Used  Avail  Use%  Mounted on
/dev/mapper/centos_ovirt-root   8.0G  6.6G  1.5G   82%   /
devtmpfs                        16G   0     16G    0%    /dev
tmpfs                           16G   4.0K  16G    1%    /dev/shm
tmpfs                           16G   18M   16G    1%    /run
tmpfs                           16G   0     16G    0%    /sys/fs/cgroup
/dev/mapper/gluster-data        177G  59G   119G   33%   /gluster/brick2
/dev/mapper/gluster-engine      25G   18G   7.9G   69%   /gluster/brick1
/dev/mapper/vg_thin-iso         25G   7.5G  18G    30%   /gluster/brick4
/dev/mapper/vg_thin-data--hdd   1.2T  308G  892G   26%   /gluster/brick3
/dev/sda1                       497M  313M  185M   63%   /boot
192.168.8.11:/engine            25G   20G   5.5G   79%   /rhev/data-center/mnt/glusterSD/192.168.8.11:_engine
tmpfs                           3.2G  0     3.2G   0%    /run/user/0
192.168.8.11:/data              136G  60G   77G    44%   /rhev/data-center/mnt/glusterSD/192.168.8.11:_data
192.168.8.11:/iso               25G   7.5G  18G    30%   /rhev/data-center/mnt/glusterSD/192.168.8.11:_iso
172.172.1.11:/data-hdd          1.2T  308G  892G   26%   /rhev/data-center/mnt/glusterSD/172.172.1.11:_data-hdd
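For what it's worth, sanlock reports "No space left on device" when it cannot acquire a host-id slot in its lockspace on the shared storage, so a full filesystem is not necessarily implicated. A guarded sketch of the usual checks; the commented `hosted-engine --reinitialize-lockspace` sequence is the documented recovery step for a broken hosted-engine lockspace, but verify it against your oVirt version before running it:

```shell
# The ENOSPC from virDomainCreateWithFlags() comes from sanlock, and usually
# means the hosted-engine lockspace has no usable slot for this host id --
# not that a filesystem is full. Guarded so the sketch runs harmlessly anywhere:
if command -v sanlock >/dev/null 2>&1; then
    sanlock client status 2>&1 || true   # list lockspaces and held resources
    status="checked"
else
    echo "sanlock not installed on this machine"
    status="skipped"
fi
# Documented hosted-engine recovery (stop the HA agents first; verify the
# exact procedure for your oVirt version before running):
#   systemctl stop ovirt-ha-agent ovirt-ha-broker
#   hosted-engine --reinitialize-lockspace
#   systemctl start ovirt-ha-broker ovirt-ha-agent
```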
-------- --Jim
On Tue, May 29, 2018 at 11:53 PM, Jim Kusznir <jim@palousetech.com> wrote:
I've just gotten back to the point where hosted-engine --vm-status does (eventually) return some output other than the "ha-agent or storage is down" error. It currently takes up to two minutes, and the last run returned stale data.
Gluster-wise, gluster volume heal <volume> info shows that the three volumes with all bricks present report 0 entries (and return quickly). The fourth volume, which is missing one brick (but still has two replicas), does return entries, and is currently taking a very long time to list them.
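The per-volume heal counts can be checked without eyeballing the full listing. A minimal sketch (the `pending_bricks` helper is my name, and the input format is assumed from the pastes in this thread):

```shell
# Count bricks that still report pending heal entries, given
# `gluster volume heal <vol> info` output on stdin.
pending_bricks() {
    grep -c '^Number of entries: [1-9]'
}

# On a live host you would feed it real output, e.g.:
#   gluster volume heal data-hdd info | pending_bricks
# Here, a sample mirroring the paste in this thread:
sample='Brick 172.172.1.11:/gluster/brick3/data-hdd
Number of entries: 8
Brick 172.172.1.12:/gluster/brick3/data-hdd
Number of entries: 8
Brick 172.172.1.13:/gluster/brick3/data-hdd
Number of entries: 0'
printf '%s\n' "$sample" | pending_bricks   # -> 2
```

A loop over `engine data iso data-hdd` would give a quick dashboard of which volumes still have outstanding heals.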
Making progress toward getting it online again... I don't know why I get stale data from hosted-engine --vm-status, or how to overcome that...
--Jim
On Tue, May 29, 2018 at 11:38 PM, Jim Kusznir <jim@palousetech.com> wrote:
I believe the gluster data store for the engine is up and working correctly. The rest are not mounted, as the engine hasn't managed to start correctly yet. I did perform the copy-a-junk-file test: copy a file into the data store, check that it's there on another host, then delete it and confirm it disappeared on the first host; it passed. Here's the info and status. (I have NOT performed the steps that Krutika and Ravishankar suggested yet, as I don't have my data volumes working again.)
[root@ovirt2 images]# gluster volume info
Volume Name: data Type: Replicate Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59 Status: Started Snapshot Count: 0 Number of Bricks: 1 x (2 + 1) = 3 Transport-type: tcp Bricks: Brick1: ovirt1.nwfiber.com:/gluster/brick2/data Brick2: ovirt2.nwfiber.com:/gluster/brick2/data Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter) Options Reconfigured: diagnostics.count-fop-hits: on diagnostics.latency-measurement: on changelog.changelog: on geo-replication.ignore-pid-check: on geo-replication.indexing: on server.allow-insecure: on performance.readdir-ahead: on performance.quick-read: off performance.read-ahead: off performance.io-cache: off performance.stat-prefetch: off cluster.eager-lock: enable network.remote-dio: enable cluster.quorum-type: auto cluster.server-quorum-type: server storage.owner-uid: 36 storage.owner-gid: 36 features.shard: on features.shard-block-size: 512MB performance.low-prio-threads: 32 cluster.data-self-heal-algorithm: full cluster.locking-scheme: granular cluster.shd-wait-qlength: 10000 cluster.shd-max-threads: 8 network.ping-timeout: 30 user.cifs: off nfs.disable: on performance.strict-o-direct: on
Volume Name: data-hdd Type: Replicate Volume ID: d342a3ab-16f3-49f0-bbcf-f788be8ac5f1 Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: 172.172.1.11:/gluster/brick3/data-hdd Brick2: 172.172.1.12:/gluster/brick3/data-hdd Brick3: 172.172.1.13:/gluster/brick3/data-hdd Options Reconfigured: diagnostics.count-fop-hits: on diagnostics.latency-measurement: on network.ping-timeout: 30 server.allow-insecure: on storage.owner-gid: 36 storage.owner-uid: 36 user.cifs: off features.shard: on cluster.shd-wait-qlength: 10000 cluster.shd-max-threads: 8 cluster.locking-scheme: granular cluster.data-self-heal-algorithm: full cluster.server-quorum-type: server cluster.quorum-type: auto cluster.eager-lock: enable network.remote-dio: enable performance.low-prio-threads: 32 performance.stat-prefetch: off performance.io-cache: off performance.read-ahead: off performance.quick-read: off changelog.changelog: on geo-replication.ignore-pid-check: on geo-replication.indexing: on transport.address-family: inet performance.readdir-ahead: on
Volume Name: engine Type: Replicate Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de Status: Started Snapshot Count: 0 Number of Bricks: 1 x (2 + 1) = 3 Transport-type: tcp Bricks: Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter) Options Reconfigured: changelog.changelog: on geo-replication.ignore-pid-check: on geo-replication.indexing: on performance.readdir-ahead: on performance.quick-read: off performance.read-ahead: off performance.io-cache: off performance.stat-prefetch: off cluster.eager-lock: enable network.remote-dio: off cluster.quorum-type: auto cluster.server-quorum-type: server storage.owner-uid: 36 storage.owner-gid: 36 features.shard: on features.shard-block-size: 512MB performance.low-prio-threads: 32 cluster.data-self-heal-algorithm: full cluster.locking-scheme: granular cluster.shd-wait-qlength: 10000 cluster.shd-max-threads: 6 network.ping-timeout: 30 user.cifs: off nfs.disable: on performance.strict-o-direct: on
Volume Name: iso
Type: Replicate
Volume ID: b1ba15f5-0f0f-4411-89d0-595179f02b92
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick4/iso
Brick2: ovirt2.nwfiber.com:/gluster/brick4/iso
Brick3: ovirt3.nwfiber.com:/gluster/brick4/iso (arbiter)
Options Reconfigured:
performance.strict-o-direct: on
nfs.disable: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 6
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
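One thing that jumps out of the three listings above is that data-hdd's option set differs from engine and iso (network.remote-dio is enable vs. off, and data-hdd lacks nfs.disable and performance.strict-o-direct). A quick, purely illustrative way to spot such drift is to diff the "Options Reconfigured" stanzas; the option lists below are inlined samples, not the full lists, and on a live cluster they would come from something like `gluster volume info <vol> | sed -n '/^Options Reconfigured/,$p'`:

```shell
# Hedged sketch: diff two volumes' reconfigured options to spot drift.
# Sample (abbreviated) option lists inlined for demonstration.
opts_data_hdd='network.remote-dio: enable
performance.quick-read: off'
opts_engine='network.remote-dio: off
performance.quick-read: off
performance.strict-o-direct: on'
a=$(mktemp); b=$(mktemp)
printf '%s\n' "$opts_data_hdd" | sort > "$a"
printf '%s\n' "$opts_engine"   | sort > "$b"
drift=$(diff "$a" "$b" || true)   # non-empty whenever the volumes disagree
printf '%s\n' "$drift"
rm -f "$a" "$b"
```

Whether the differing options are intentional (the HDD volume was likely created by hand rather than by the oVirt/gdeploy defaults) is something only the admin can say, but the diff makes the discussion concrete.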
[root@ovirt2 images]# gluster volume status
Status of volume: data
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick2/data   49152     0          Y     3226
Brick ovirt2.nwfiber.com:/gluster/brick2/data   49152     0          Y     2967
Brick ovirt3.nwfiber.com:/gluster/brick2/data   49152     0          Y     2554
Self-heal Daemon on localhost                   N/A       N/A        Y     4818
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y     17853
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y     4771
Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: data-hdd
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 172.172.1.11:/gluster/brick3/data-hdd     49153     0          Y     3232
Brick 172.172.1.12:/gluster/brick3/data-hdd     49153     0          Y     2976
Brick 172.172.1.13:/gluster/brick3/data-hdd     N/A       N/A        N     N/A
NFS Server on localhost                         N/A       N/A        N     N/A
Self-heal Daemon on localhost                   N/A       N/A        Y     4818
NFS Server on ovirt3.nwfiber.com                N/A       N/A        N     N/A
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y     17853
NFS Server on ovirt1.nwfiber.com                N/A       N/A        N     N/A
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y     4771
Task Status of Volume data-hdd
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: engine
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick1/engine 49154     0          Y     3239
Brick ovirt2.nwfiber.com:/gluster/brick1/engine 49154     0          Y     2982
Brick ovirt3.nwfiber.com:/gluster/brick1/engine 49154     0          Y     2578
Self-heal Daemon on localhost                   N/A       N/A        Y     4818
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y     17853
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y     4771
Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: iso
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick4/iso    49155     0          Y     3247
Brick ovirt2.nwfiber.com:/gluster/brick4/iso    49155     0          Y     2990
Brick ovirt3.nwfiber.com:/gluster/brick4/iso    49155     0          Y     2580
Self-heal Daemon on localhost                   N/A       N/A        Y     4818
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y     17853
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y     4771
Task Status of Volume iso
------------------------------------------------------------------------------
There are no active volume tasks
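The status above shows the data-hdd brick on 172.172.1.13 offline (Online: N), which matches the pending heal entries earlier in the thread. For watching the heal backlog per brick, the `heal info` output can be collapsed to one line per brick with awk; this is an illustrative sketch with a shortened sample inlined (the /path/to/... entries are placeholders, not the real image paths), and live usage would pipe in `gluster volume heal data-hdd info` instead:

```shell
# Hedged sketch: summarize `gluster volume heal <vol> info` output as
# "<brick> <pending entries>" lines. Sample input inlined below.
heal_info='Brick 172.172.1.11:/gluster/brick3/data-hdd
/path/to/entry-1
/path/to/entry-2
Status: Connected
Number of entries: 2
Brick 172.172.1.13:/gluster/brick3/data-hdd
Status: Connected
Number of entries: 0'
summary=$(printf '%s\n' "$heal_info" |
  awk '/^Brick /{b=$2} /^Number of entries:/{print b, $4}')
printf '%s\n' "$summary"
```

Re-running that while a heal is in progress gives a quick view of whether the per-brick counts are actually shrinking.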
On Tue, May 29, 2018 at 11:30 PM, Sahina Bose <sabose@redhat.com> wrote:
On Wed, May 30, 2018 at 10:42 AM, Jim Kusznir <jim@palousetech.com> wrote:
hosted-engine --deploy failed (would not come up on my existing gluster storage). However, I realized no changes were written to my existing storage. So, I went back to trying to get my old engine running.
hosted-engine --vm-status is now taking a very long time (5+ minutes) to return, and it returns stale information everywhere. I thought perhaps the lockspace was corrupt, so I tried to clean that and the metadata, but both are failing (--clean-metadata has hung and I can't even ctrl-c out of it).
How can I reinitialize all the lockspace/metadata safely? There is no engine or VMs running currently....
I think the first thing to make sure is that your storage is up and running. So, can you mount the gluster volumes and access the contents there? Please provide the gluster volume info and gluster volume status output for the volumes that you're using.
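One concrete way to follow that suggestion is to mount each volume on a scratch mountpoint and verify both listing and writing. This is a hedged sketch: the commented mount line uses a hostname/volume from this thread and needs a reachable gluster server, so only the list/write probe below it (which works against any mounted path) runs here:

```shell
# Hedged sketch of a basic read/write sanity check on a mountpoint.
mnt=$(mktemp -d)
# Needs the live cluster, so left commented in this sketch:
# mount -t glusterfs ovirt1.nwfiber.com:/engine "$mnt"
probe_out=$(
  ls -a "$mnt" > /dev/null && echo "list OK"
  touch "$mnt/.rwtest" && rm -f "$mnt/.rwtest" && echo "write OK"
)
printf '%s\n' "$probe_out"
rmdir "$mnt"
```

If the mount itself hangs or either probe fails on a given volume, that points at the storage layer rather than the hosted-engine agent.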
--Jim
On Tue, May 29, 2018 at 9:33 PM, Jim Kusznir <jim@palousetech.com> wrote:
> Well, things went from bad to very, very bad....
>
> It appears that during one of the 2 minute lockups, the fencing agents decided that another node in the cluster was down. As a result, 2 of the 3 nodes were simultaneously reset with fencing agent reboot. After the nodes came back up, the engine would not start. All running VMs (including VMs on the 3rd node that was not rebooted) crashed.
>
> I've now been working for about 3 hours trying to get the engine to come up. I don't know why it won't start. hosted-engine --vm-start says its starting, but it doesn't start (virsh doesn't show any VMs running). I'm currently running --deploy, as I had run out of options for anything else I can come up with. I hope this will allow me to re-import all my existing VMs and allow me to start them back up after everything comes back up.
>
> I do have an unverified geo-rep backup; I don't know if it is a good backup (there were several prior messages to this list, but I didn't get replies to my questions. It was running in what I believe to be "strange", and the data directories are larger than their source).
>
> I'll see if my --deploy works, and if not, I'll be back with another message/help request.
>
> When the dust settles and I'm at least minimally functional again, I really want to understand why all these technologies designed to offer redundancy conspired to reduce uptime and create failures where there weren't any otherwise. I thought with hosted engine, 3 ovirt servers and glusterfs with minimum replica 2+arb or replica 3 should have offered strong resilience against server failure or disk failure, and should have prevented / recovered from data corruption. Instead, all of the above happened (once I get my cluster back up, I still have to try and recover my webserver VM, which won't boot due to XFS corrupt journal issues created during the gluster crashes).
I think a lot of these issues were rooted > from the upgrade from 4.1 to 4.2. > > --Jim > > On Tue, May 29, 2018 at 6:25 PM, Jim Kusznir <jim@palousetech.com> > wrote: > >> I also finally found the following in my system log on one server: >> >> [10679.524491] INFO: task glusterclogro:14933 blocked for more than >> 120 seconds. >> [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. >> [10679.527144] glusterclogro D ffff97209832bf40 0 14933 >> 1 0x00000080 >> [10679.527150] Call Trace: >> [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 >> [xfs] >> [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >> [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 >> [xfs] >> [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >> [10679.527268] [<ffffffffb992076f>] ? >> system_call_after_swapgs+0xbc/0x160 >> [10679.527271] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 >> [10679.527275] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >> [10679.527279] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [10679.527283] INFO: task glusterposixfsy:14941 blocked for more >> than 120 seconds. >> [10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. >> [10679.529956] glusterposixfsy D ffff972495f84f10 0 14941 >> 1 0x00000080 >> [10679.529961] Call Trace: >> [10679.529966] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [10679.530003] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 >> [xfs] >> [10679.530008] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >> [10679.530038] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 >> [xfs] >> [10679.530042] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >> [10679.530046] [<ffffffffb992076f>] ? 
>> system_call_after_swapgs+0xbc/0x160 >> [10679.530050] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 >> [10679.530054] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >> [10679.530058] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [10679.530062] INFO: task glusteriotwr13:15486 blocked for more >> than 120 seconds. >> [10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. >> [10679.533732] glusteriotwr13 D ffff9720a83f0000 0 15486 >> 1 0x00000080 >> [10679.533738] Call Trace: >> [10679.533747] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [10679.533799] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 >> [xfs] >> [10679.533806] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >> [10679.533846] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 >> [xfs] >> [10679.533852] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >> [10679.533858] [<ffffffffb992076f>] ? >> system_call_after_swapgs+0xbc/0x160 >> [10679.533863] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 >> [10679.533868] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >> [10679.533873] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [10919.512757] INFO: task glusterclogro:14933 blocked for more than >> 120 seconds. >> [10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. >> [10919.516663] glusterclogro D ffff97209832bf40 0 14933 >> 1 0x00000080 >> [10919.516677] Call Trace: >> [10919.516690] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [10919.516696] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 >> [10919.516703] [<ffffffffb951cc04>] ? blk_finish_plug+0x14/0x40 >> [10919.516768] [<ffffffffc05e9224>] ? _xfs_buf_ioapply+0x334/0x460 >> [xfs] >> [10919.516774] [<ffffffffb991432d>] wait_for_completion+0xfd/0x140 >> [10919.516782] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >> [10919.516821] [<ffffffffc05eb0a3>] ? 
_xfs_buf_read+0x23/0x40 [xfs] >> [10919.516859] [<ffffffffc05eafa9>] xfs_buf_submit_wait+0xf9/0x1d0 >> [xfs] >> [10919.516902] [<ffffffffc061b279>] ? xfs_trans_read_buf_map+0x199/0x400 >> [xfs] >> [10919.516940] [<ffffffffc05eb0a3>] _xfs_buf_read+0x23/0x40 [xfs] >> [10919.516977] [<ffffffffc05eb1b9>] xfs_buf_read_map+0xf9/0x160 >> [xfs] >> [10919.517022] [<ffffffffc061b279>] xfs_trans_read_buf_map+0x199/0x400 >> [xfs] >> [10919.517057] [<ffffffffc05c8d04>] xfs_da_read_buf+0xd4/0x100 >> [xfs] >> [10919.517091] [<ffffffffc05c8d53>] xfs_da3_node_read+0x23/0xd0 >> [xfs] >> [10919.517126] [<ffffffffc05c9fee>] xfs_da3_node_lookup_int+0x6e/0x2f0 >> [xfs] >> [10919.517160] [<ffffffffc05d5a1d>] xfs_dir2_node_lookup+0x4d/0x170 >> [xfs] >> [10919.517194] [<ffffffffc05ccf5d>] xfs_dir_lookup+0x1bd/0x1e0 >> [xfs] >> [10919.517233] [<ffffffffc05fd8d9>] xfs_lookup+0x69/0x140 [xfs] >> [10919.517271] [<ffffffffc05fa018>] xfs_vn_lookup+0x78/0xc0 [xfs] >> [10919.517278] [<ffffffffb9425cf3>] lookup_real+0x23/0x60 >> [10919.517283] [<ffffffffb9426702>] __lookup_hash+0x42/0x60 >> [10919.517288] [<ffffffffb942d519>] SYSC_renameat2+0x3a9/0x5a0 >> [10919.517296] [<ffffffffb94d3753>] ? selinux_file_free_security+ >> 0x23/0x30 >> [10919.517304] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [10919.517309] [<ffffffffb992076f>] ? >> system_call_after_swapgs+0xbc/0x160 >> [10919.517313] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [10919.517318] [<ffffffffb992076f>] ? >> system_call_after_swapgs+0xbc/0x160 >> [10919.517323] [<ffffffffb942e58e>] SyS_renameat2+0xe/0x10 >> [10919.517328] [<ffffffffb942e5ce>] SyS_rename+0x1e/0x20 >> [10919.517333] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >> [10919.517339] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [11159.496095] INFO: task glusteriotwr9:15482 blocked for more than >> 120 seconds. >> [11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. 
>> [11159.498978] glusteriotwr9 D ffff971fa0fa1fa0 0 15482 >> 1 0x00000080 >> [11159.498984] Call Trace: >> [11159.498995] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50 >> [11159.498999] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [11159.499003] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 >> [11159.499056] [<ffffffffc05dd9b7>] ? >> xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs] >> [11159.499082] [<ffffffffc05dd43e>] ? >> xfs_iext_bno_to_irec+0x8e/0xd0 [xfs] >> [11159.499090] [<ffffffffb92f7a12>] ? ktime_get_ts64+0x52/0xf0 >> [11159.499093] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50 >> [11159.499097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 >> [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20 >> [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50 >> [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0 >> [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90 >> [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40 >> [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70 >> [11159.499125] [<ffffffffb9394e85>] grab_cache_page_write_begin+ >> 0x55/0xc0 >> [11159.499130] [<ffffffffb9484b76>] iomap_write_begin+0x66/0x100 >> [11159.499135] [<ffffffffb9484edf>] iomap_write_actor+0xcf/0x1d0 >> [11159.499140] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 >> [11159.499144] [<ffffffffb94854e7>] iomap_apply+0xb7/0x150 >> [11159.499149] [<ffffffffb9485621>] iomap_file_buffered_write+ >> 0xa1/0xe0 >> [11159.499153] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 >> [11159.499182] [<ffffffffc05f025d>] xfs_file_buffered_aio_write+0x12d/0x2c0 >> [xfs] >> [11159.499213] [<ffffffffc05f057d>] xfs_file_aio_write+0x18d/0x1b0 >> [xfs] >> [11159.499217] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 >> [11159.499222] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 >> [11159.499225] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 >> [11159.499230] [<ffffffffb992076f>] ? 
>> system_call_after_swapgs+0xbc/0x160 >> [11159.499234] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >> [11159.499238] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than >> 120 seconds. >> [11279.490197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. >> [11279.491665] xfsaild/dm-10 D ffff9720a8660fd0 0 1134 >> 2 0x00000000 >> [11279.491671] Call Trace: >> [11279.491682] [<ffffffffb92a3a2e>] ? try_to_del_timer_sync+0x5e/ >> 0x90 >> [11279.491688] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [11279.491744] [<ffffffffc060de36>] _xfs_log_force+0x1c6/0x2c0 >> [xfs] >> [11279.491750] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >> [11279.491783] [<ffffffffc0619fec>] ? xfsaild+0x16c/0x6f0 [xfs] >> [11279.491817] [<ffffffffc060df5c>] xfs_log_force+0x2c/0x70 [xfs] >> [11279.491849] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 >> [xfs] >> [11279.491880] [<ffffffffc0619fec>] xfsaild+0x16c/0x6f0 [xfs] >> [11279.491913] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 >> [xfs] >> [11279.491919] [<ffffffffb92bb161>] kthread+0xd1/0xe0 >> [11279.491926] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 >> [11279.491932] [<ffffffffb9920677>] ret_from_fork_nospec_begin+ >> 0x21/0x21 >> [11279.491936] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 >> [11279.491976] INFO: task glusterclogfsyn:14934 blocked for more >> than 120 seconds. >> [11279.493466] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. >> [11279.494952] glusterclogfsyn D ffff97209832af70 0 14934 >> 1 0x00000080 >> [11279.494957] Call Trace: >> [11279.494979] [<ffffffffc0309839>] ? >> __split_and_process_bio+0x2e9/0x520 [dm_mod] >> [11279.494983] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [11279.494987] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 >> [11279.494997] [<ffffffffc0309d98>] ? 
dm_make_request+0x128/0x1a0 >> [dm_mod] >> [11279.495001] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 >> [11279.495005] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/ >> 0x140 >> [11279.495010] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >> [11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 >> [11279.495049] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 >> [xfs] >> [11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 >> [xfs] >> [11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >> [11279.495090] [<ffffffffb992076f>] ? >> system_call_after_swapgs+0xbc/0x160 >> [11279.495094] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 >> [11279.495098] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >> [11279.495102] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [11279.495105] INFO: task glusterposixfsy:14941 blocked for more >> than 120 seconds. >> [11279.496606] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. >> [11279.498114] glusterposixfsy D ffff972495f84f10 0 14941 >> 1 0x00000080 >> [11279.498118] Call Trace: >> [11279.498134] [<ffffffffc0309839>] ? >> __split_and_process_bio+0x2e9/0x520 [dm_mod] >> [11279.498138] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [11279.498142] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 >> [11279.498152] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 >> [dm_mod] >> [11279.498156] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 >> [11279.498160] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/ >> 0x140 >> [11279.498165] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >> [11279.498169] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 >> [11279.498202] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 >> [xfs] >> [11279.498231] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 >> [xfs] >> [11279.498238] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >> [11279.498242] [<ffffffffb992076f>] ? 
>> system_call_after_swapgs+0xbc/0x160 >> [11279.498246] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 >> [11279.498250] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >> [11279.498254] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [11279.498257] INFO: task glusteriotwr1:14950 blocked for more than >> 120 seconds. >> [11279.499789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. >> [11279.501343] glusteriotwr1 D ffff97208b6daf70 0 14950 >> 1 0x00000080 >> [11279.501348] Call Trace: >> [11279.501353] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [11279.501390] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 >> [xfs] >> [11279.501396] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >> [11279.501428] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 >> [xfs] >> [11279.501432] [<ffffffffb944ef3f>] generic_write_sync+0x4f/0x70 >> [11279.501461] [<ffffffffc05f0545>] xfs_file_aio_write+0x155/0x1b0 >> [xfs] >> [11279.501466] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 >> [11279.501471] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 >> [11279.501475] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 >> [11279.501479] [<ffffffffb992076f>] ? >> system_call_after_swapgs+0xbc/0x160 >> [11279.501483] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >> [11279.501489] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [11279.501493] INFO: task glusteriotwr4:14953 blocked for more than >> 120 seconds. >> [11279.503047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. >> [11279.504630] glusteriotwr4 D ffff972499f2bf40 0 14953 >> 1 0x00000080 >> [11279.504635] Call Trace: >> [11279.504640] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [11279.504676] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 >> [xfs] >> [11279.504681] [<ffffffffb92cf1b0>] ? 
wake_up_state+0x20/0x20 >> [11279.504710] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 >> [xfs] >> [11279.504714] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >> [11279.504718] [<ffffffffb992076f>] ? >> system_call_after_swapgs+0xbc/0x160 >> [11279.504722] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 >> [11279.504725] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >> [11279.504730] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [12127.466494] perf: interrupt took too long (8263 > 8150), >> lowering kernel.perf_event_max_sample_rate to 24000 >> >> -------------------- >> I think this is the cause of the massive ovirt performance issues >> irrespective of gluster volume. At the time this happened, I was also >> ssh'ed into the host, and was doing some rpm querry commands. I had just >> run rpm -qa |grep glusterfs (to verify what version was actually >> installed), and that command took almost 2 minutes to return! Normally it >> takes less than 2 seconds. That is all pure local SSD IO, too.... >> >> I'm no expert, but its my understanding that anytime a software >> causes these kinds of issues, its a serious bug in the software, even if >> its mis-handled exceptions. Is this correct? >> >> --Jim >> >> On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir <jim@palousetech.com> >> wrote: >> >>> I think this is the profile information for one of the volumes >>> that lives on the SSDs and is fully operational with no down/problem disks: >>> >>> [root@ovirt2 yum.repos.d]# gluster volume profile data info >>> Brick: ovirt2.nwfiber.com:/gluster/brick2/data >>> ---------------------------------------------- >>> Cumulative Stats: >>> Block Size: 256b+ 512b+ >>> 1024b+ >>> No. of Reads: 983 2696 >>> 1059 >>> No. of Writes: 0 1113 >>> 302 >>> >>> Block Size: 2048b+ 4096b+ >>> 8192b+ >>> No. of Reads: 852 88608 >>> 53526 >>> No. of Writes: 522 812340 >>> 76257 >>> >>> Block Size: 16384b+ 32768b+ >>> 65536b+ >>> No. of Reads: 54351 241901 >>> 15024 >>> No. 
of Writes: 21636 8656 >>> 8976 >>> >>> Block Size: 131072b+ >>> No. of Reads: 524156 >>> No. of Writes: 296071 >>> %-latency Avg-latency Min-Latency Max-Latency No. of >>> calls Fop >>> --------- ----------- ----------- ----------- >>> ------------ ---- >>> 0.00 0.00 us 0.00 us 0.00 us >>> 4189 RELEASE >>> 0.00 0.00 us 0.00 us 0.00 us >>> 1257 RELEASEDIR >>> 0.00 46.19 us 12.00 us 187.00 us >>> 69 FLUSH >>> 0.00 147.00 us 78.00 us 367.00 us >>> 86 REMOVEXATTR >>> 0.00 223.46 us 24.00 us 1166.00 us >>> 149 READDIR >>> 0.00 565.34 us 76.00 us 3639.00 us >>> 88 FTRUNCATE >>> 0.00 263.28 us 20.00 us 28385.00 us >>> 228 LK >>> 0.00 98.84 us 2.00 us 880.00 us >>> 1198 OPENDIR >>> 0.00 91.59 us 26.00 us 10371.00 us >>> 3853 STATFS >>> 0.00 494.14 us 17.00 us 193439.00 us >>> 1171 GETXATTR >>> 0.00 299.42 us 35.00 us 9799.00 us >>> 2044 READDIRP >>> 0.00 1965.31 us 110.00 us 382258.00 us >>> 321 XATTROP >>> 0.01 113.40 us 24.00 us 61061.00 us >>> 8134 STAT >>> 0.01 755.38 us 57.00 us 607603.00 us >>> 3196 DISCARD >>> 0.05 2690.09 us 58.00 us 2704761.00 us >>> 3206 OPEN >>> 0.10 119978.25 us 97.00 us 9406684.00 us >>> 154 SETATTR >>> 0.18 101.73 us 28.00 us 700477.00 us >>> 313379 FSTAT >>> 0.23 1059.84 us 25.00 us 2716124.00 us >>> 38255 LOOKUP >>> 0.47 1024.11 us 54.00 us 6197164.00 us >>> 81455 FXATTROP >>> 1.72 2984.00 us 15.00 us 37098954.00 us >>> 103020 FINODELK >>> 5.92 44315.32 us 51.00 us 24731536.00 us >>> 23957 FSYNC >>> 13.27 2399.78 us 25.00 us 22089540.00 us >>> 991005 READ >>> 37.00 5980.43 us 52.00 us 22099889.00 us >>> 1108976 WRITE >>> 41.04 5452.75 us 13.00 us 22102452.00 us >>> 1349053 INODELK >>> >>> Duration: 10026 seconds >>> Data Read: 80046027759 bytes >>> Data Written: 44496632320 bytes >>> >>> Interval 1 Stats: >>> Block Size: 256b+ 512b+ >>> 1024b+ >>> No. of Reads: 983 2696 >>> 1059 >>> No. of Writes: 0 838 >>> 185 >>> >>> Block Size: 2048b+ 4096b+ >>> 8192b+ >>> No. of Reads: 852 85856 >>> 51575 >>> No. 
of Writes: 382 705802 >>> 57812 >>> >>> Block Size: 16384b+ 32768b+ >>> 65536b+ >>> No. of Reads: 52673 232093 >>> 14984 >>> No. of Writes: 13499 4908 >>> 4242 >>> >>> Block Size: 131072b+ >>> No. of Reads: 460040 >>> No. of Writes: 6411 >>> %-latency Avg-latency Min-Latency Max-Latency No. of >>> calls Fop >>> --------- ----------- ----------- ----------- >>> ------------ ---- >>> 0.00 0.00 us 0.00 us 0.00 us >>> 2093 RELEASE >>> 0.00 0.00 us 0.00 us 0.00 us >>> 1093 RELEASEDIR >>> 0.00 53.38 us 26.00 us 111.00 us >>> 16 FLUSH >>> 0.00 145.14 us 78.00 us 367.00 us >>> 71 REMOVEXATTR >>> 0.00 190.96 us 114.00 us 298.00 us >>> 71 SETATTR >>> 0.00 213.38 us 24.00 us 1145.00 us >>> 90 READDIR >>> 0.00 263.28 us 20.00 us 28385.00 us >>> 228 LK >>> 0.00 101.76 us 2.00 us 880.00 us >>> 1093 OPENDIR >>> 0.01 93.60 us 27.00 us 10371.00 us >>> 3090 STATFS >>> 0.02 537.47 us 17.00 us 193439.00 us >>> 1038 GETXATTR >>> 0.03 297.44 us 35.00 us 9799.00 us >>> 1990 READDIRP >>> 0.03 2357.28 us 110.00 us 382258.00 us >>> 253 XATTROP >>> 0.04 385.93 us 58.00 us 47593.00 us >>> 2091 OPEN >>> 0.04 114.86 us 24.00 us 61061.00 us >>> 7715 STAT >>> 0.06 444.59 us 57.00 us 333240.00 us >>> 3053 DISCARD >>> 0.42 316.24 us 25.00 us 290728.00 us >>> 29823 LOOKUP >>> 0.73 257.92 us 54.00 us 344812.00 us >>> 63296 FXATTROP >>> 1.37 98.30 us 28.00 us 67621.00 us >>> 313172 FSTAT >>> 1.58 2124.69 us 51.00 us 849200.00 us >>> 16717 FSYNC >>> 5.73 162.46 us 52.00 us 748492.00 us >>> 794079 WRITE >>> 7.19 2065.17 us 16.00 us 37098954.00 us >>> 78381 FINODELK >>> 36.44 886.32 us 25.00 us 2216436.00 us >>> 925421 READ >>> 46.30 1178.04 us 13.00 us 1700704.00 us >>> 884635 INODELK >>> >>> Duration: 7485 seconds >>> Data Read: 71250527215 bytes >>> Data Written: 5119903744 bytes >>> >>> Brick: ovirt3.nwfiber.com:/gluster/brick2/data >>> ---------------------------------------------- >>> Cumulative Stats: >>> Block Size: 1b+ >>> No. of Reads: 0 >>> No. 
of Writes: 3264419 >>> %-latency Avg-latency Min-Latency Max-Latency No. of >>> calls Fop >>> --------- ----------- ----------- ----------- >>> ------------ ---- >>> 0.00 0.00 us 0.00 us 0.00 us >>> 90 FORGET >>> 0.00 0.00 us 0.00 us 0.00 us >>> 9462 RELEASE >>> 0.00 0.00 us 0.00 us 0.00 us >>> 4254 RELEASEDIR >>> 0.00 50.52 us 13.00 us 190.00 us >>> 71 FLUSH >>> 0.00 186.97 us 87.00 us 713.00 us >>> 86 REMOVEXATTR >>> 0.00 79.32 us 33.00 us 189.00 us >>> 228 LK >>> 0.00 220.98 us 129.00 us 513.00 us >>> 86 SETATTR >>> 0.01 259.30 us 26.00 us 2632.00 us >>> 137 READDIR >>> 0.02 322.76 us 145.00 us 2125.00 us >>> 321 XATTROP >>> 0.03 109.55 us 2.00 us 1258.00 us >>> 1193 OPENDIR >>> 0.05 70.21 us 21.00 us 431.00 us >>> 3196 DISCARD >>> 0.05 169.26 us 21.00 us 2315.00 us >>> 1545 GETXATTR >>> 0.12 176.85 us 63.00 us 2844.00 us >>> 3206 OPEN >>> 0.61 303.49 us 90.00 us 3085.00 us >>> 9633 FSTAT >>> 2.44 305.66 us 28.00 us 3716.00 us >>> 38230 LOOKUP >>> 4.52 266.22 us 55.00 us 53424.00 us >>> 81455 FXATTROP >>> 6.96 1397.99 us 51.00 us 64822.00 us >>> 23889 FSYNC >>> 16.48 84.74 us 25.00 us 6917.00 us >>> 932592 WRITE >>> 30.16 106.90 us 13.00 us 3920189.00 us >>> 1353046 INODELK >>> 38.55 1794.52 us 14.00 us 16210553.00 us >>> 103039 FINODELK >>> >>> Duration: 66562 seconds >>> Data Read: 0 bytes >>> Data Written: 3264419 bytes >>> >>> Interval 1 Stats: >>> Block Size: 1b+ >>> No. of Reads: 0 >>> No. of Writes: 794080 >>> %-latency Avg-latency Min-Latency Max-Latency No. 
of >>> calls Fop >>> --------- ----------- ----------- ----------- >>> ------------ ---- >>> 0.00 0.00 us 0.00 us 0.00 us >>> 2093 RELEASE >>> 0.00 0.00 us 0.00 us 0.00 us >>> 1093 RELEASEDIR >>> 0.00 70.31 us 26.00 us 125.00 us >>> 16 FLUSH >>> 0.00 193.10 us 103.00 us 713.00 us >>> 71 REMOVEXATTR >>> 0.01 227.32 us 133.00 us 513.00 us >>> 71 SETATTR >>> 0.01 79.32 us 33.00 us 189.00 us >>> 228 LK >>> 0.01 259.83 us 35.00 us 1138.00 us >>> 89 READDIR >>> 0.03 318.26 us 145.00 us 2047.00 us >>> 253 XATTROP >>> 0.04 112.67 us 3.00 us 1258.00 us >>> 1093 OPENDIR >>> 0.06 167.98 us 23.00 us 1951.00 us >>> 1014 GETXATTR >>> 0.08 70.97 us 22.00 us 431.00 us >>> 3053 DISCARD >>> 0.13 183.78 us 66.00 us 2844.00 us >>> 2091 OPEN >>> 1.01 303.82 us 90.00 us 3085.00 us >>> 9610 FSTAT >>> 3.27 316.59 us 30.00 us 3716.00 us >>> 29820 LOOKUP >>> 5.83 265.79 us 59.00 us 53424.00 us >>> 63296 FXATTROP >>> 7.95 1373.89 us 51.00 us 64822.00 us >>> 16717 FSYNC >>> 23.17 851.99 us 14.00 us 16210553.00 us >>> 78555 FINODELK >>> 24.04 87.44 us 27.00 us 6917.00 us >>> 794081 WRITE >>> 34.36 111.91 us 14.00 us 984871.00 us >>> 886790 INODELK >>> >>> Duration: 7485 seconds >>> Data Read: 0 bytes >>> Data Written: 794080 bytes >>> >>> >>> ----------------------- >>> Here is the data from the volume that is backed by the SHDDs and >>> has one failed disk: >>> [root@ovirt2 yum.repos.d]# gluster volume profile data-hdd info >>> Brick: 172.172.1.12:/gluster/brick3/data-hdd >>> -------------------------------------------- >>> Cumulative Stats: >>> Block Size: 256b+ 512b+ >>> 1024b+ >>> No. of Reads: 1702 86 >>> 16 >>> No. of Writes: 0 767 >>> 71 >>> >>> Block Size: 2048b+ 4096b+ >>> 8192b+ >>> No. of Reads: 19 51841 >>> 2049 >>> No. of Writes: 76 60668 >>> 35727 >>> >>> Block Size: 16384b+ 32768b+ >>> 65536b+ >>> No. of Reads: 1744 639 >>> 1088 >>> No. of Writes: 8524 2410 >>> 1285 >>> >>> Block Size: 131072b+ >>> No. of Reads: 771999 >>> No. 
of Writes: 29584 >>> %-latency Avg-latency Min-Latency Max-Latency No. of >>> calls Fop >>> --------- ----------- ----------- ----------- >>> ------------ ---- >>> 0.00 0.00 us 0.00 us 0.00 us >>> 2902 RELEASE >>> 0.00 0.00 us 0.00 us 0.00 us >>> 1517 RELEASEDIR >>> 0.00 197.00 us 197.00 us 197.00 us >>> 1 FTRUNCATE >>> 0.00 70.24 us 16.00 us 758.00 us >>> 51 FLUSH >>> 0.00 143.93 us 82.00 us 305.00 us >>> 57 REMOVEXATTR >>> 0.00 178.63 us 105.00 us 712.00 us >>> 60 SETATTR >>> 0.00 67.30 us 19.00 us 572.00 us >>> 555 LK >>> 0.00 322.80 us 23.00 us 4673.00 us >>> 138 READDIR >>> 0.00 336.56 us 106.00 us 11994.00 us >>> 237 XATTROP >>> 0.00 84.70 us 28.00 us 1071.00 us >>> 3469 STATFS >>> 0.01 387.75 us 2.00 us 146017.00 us >>> 1467 OPENDIR >>> 0.01 148.59 us 21.00 us 64374.00 us >>> 4454 STAT >>> 0.02 783.02 us 16.00 us 93502.00 us >>> 1902 GETXATTR >>> 0.03 1516.10 us 17.00 us 210690.00 us >>> 1364 ENTRYLK >>> 0.03 2555.47 us 300.00 us 674454.00 us >>> 1064 READDIRP >>> 0.07 85.74 us 19.00 us 68340.00 us >>> 62849 FSTAT >>> 0.07 1978.12 us 59.00 us 202596.00 us >>> 2729 OPEN >>> 0.22 708.57 us 15.00 us 394799.00 us >>> 25447 LOOKUP >>> 5.94 2331.74 us 15.00 us 1099530.00 us >>> 207534 FINODELK >>> 7.31 8311.75 us 58.00 us 1800216.00 us >>> 71668 FXATTROP >>> 12.49 7735.19 us 51.00 us 3595513.00 us >>> 131642 WRITE >>> 17.70 957.08 us 16.00 us 13700466.00 us >>> 1508160 INODELK >>> 24.55 2546.43 us 26.00 us 5077347.00 us >>> 786060 READ >>> 31.56 49699.15 us 47.00 us 3746331.00 us >>> 51777 FSYNC >>> >>> Duration: 10101 seconds >>> Data Read: 101562897361 bytes >>> Data Written: 4834450432 bytes >>> >>> Interval 0 Stats: >>> Block Size: 256b+ 512b+ >>> 1024b+ >>> No. of Reads: 1702 86 >>> 16 >>> No. of Writes: 0 767 >>> 71 >>> >>> Block Size: 2048b+ 4096b+ >>> 8192b+ >>> No. of Reads: 19 51841 >>> 2049 >>> No. of Writes: 76 60668 >>> 35727 >>> >>> Block Size: 16384b+ 32768b+ >>> 65536b+ >>> No. of Reads: 1744 639 >>> 1088 >>> No. 
of Writes: 8524 2410 >>> 1285 >>> >>> Block Size: 131072b+ >>> No. of Reads: 771999 >>> No. of Writes: 29584 >>> %-latency Avg-latency Min-Latency Max-Latency No. of >>> calls Fop >>> --------- ----------- ----------- ----------- >>> ------------ ---- >>> 0.00 0.00 us 0.00 us 0.00 us >>> 2902 RELEASE >>> 0.00 0.00 us 0.00 us 0.00 us >>> 1517 RELEASEDIR >>> 0.00 197.00 us 197.00 us 197.00 us >>> 1 FTRUNCATE >>> 0.00 70.24 us 16.00 us 758.00 us >>> 51 FLUSH >>> 0.00 143.93 us 82.00 us 305.00 us >>> 57 REMOVEXATTR >>> 0.00 178.63 us 105.00 us 712.00 us >>> 60 SETATTR >>> 0.00 67.30 us 19.00 us 572.00 us >>> 555 LK >>> 0.00 322.80 us 23.00 us 4673.00 us >>> 138 READDIR >>> 0.00 336.56 us 106.00 us 11994.00 us >>> 237 XATTROP >>> 0.00 84.70 us 28.00 us 1071.00 us >>> 3469 STATFS >>> 0.01 387.75 us 2.00 us 146017.00 us >>> 1467 OPENDIR >>> 0.01 148.59 us 21.00 us 64374.00 us >>> 4454 STAT >>> 0.02 783.02 us 16.00 us 93502.00 us >>> 1902 GETXATTR >>> 0.03 1516.10 us 17.00 us 210690.00 us >>> 1364 ENTRYLK >>> 0.03 2555.47 us 300.00 us 674454.00 us >>> 1064 READDIRP >>> 0.07 85.73 us 19.00 us 68340.00 us >>> 62849 FSTAT >>> 0.07 1978.12 us 59.00 us 202596.00 us >>> 2729 OPEN >>> 0.22 708.57 us 15.00 us 394799.00 us >>> 25447 LOOKUP >>> 5.94 2334.57 us 15.00 us 1099530.00 us >>> 207534 FINODELK >>> 7.31 8311.49 us 58.00 us 1800216.00 us >>> 71668 FXATTROP >>> 12.49 7735.32 us 51.00 us 3595513.00 us >>> 131642 WRITE >>> 17.71 957.08 us 16.00 us 13700466.00 us >>> 1508160 INODELK >>> 24.56 2546.42 us 26.00 us 5077347.00 us >>> 786060 READ >>> 31.54 49651.63 us 47.00 us 3746331.00 us >>> 51777 FSYNC >>> >>> Duration: 10101 seconds >>> Data Read: 101562897361 bytes >>> Data Written: 4834450432 bytes >>> >>> >>> On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir <jim@palousetech.com> >>> wrote: >>> >>>> Thank you for your response. >>>> >>>> I have 4 gluster volumes. 
3 are re____________________________ >>>> ___________________ >>>> Users mailing list -- users@ovirt.org >>>> To unsubscribe send an email to users-leave@ovirt.org >>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ >>>> oVirt Code of Conduct: https://www.ovirt.org/ >>>> community/about/community-guidelines/ >>>> List Archives: https://lists.ovirt.org/ >>>> archives/list/users@ovirt.org/message/ >>>> B4U3ZMQT5ULU34RG6OTDWTZULZN3QFHG/ >>>> >>>

It is my impression, too, that there are several moving parts that are not thoroughly tested before each release cycle, which makes one hesitant to use it in production. Overall it is a great product, but it seems the actual testing is left to the end users, with all the implicit difficulty and pain that entails. I understand the bargain, though. Alex On Wed, May 30, 2018, 22:28 Jayme <jaymef@gmail.com> wrote:
As someone with hardware on the way to build a 3-node HCI cluster with only two 2TB SSDs per node, this problem concerns me. I sometimes worry about the complexity GlusterFS adds to the mix and how difficult it can be to manage when things go wrong.
I've been following this thread and feel for you. You must be getting tired; I've been there, done that, and don't wish it upon anyone. I hope you get your environment back in working order soon, and at some point discover what went wrong to cause this problem.
On Wed, May 30, 2018, 8:55 AM Jim Kusznir <jim@palousetech.com> wrote:
So, I appear to be at the point where gluster is functional, but the engine still won't start. Here's an excerpt from agent.log on one machine after issuing the hosted-engine --vm-start command:
MainThread::INFO::2018-05-29 23:52:09,953::hosted_engine::948::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) Engine VM started on localhost MainThread::INFO::2018-05-29 23:52:10,230::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStart-EngineStarting) sent? sent MainThread::INFO::2018-05-29 23:56:16,061::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400) MainThread::INFO::2018-05-29 23:56:16,075::states::779::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) VM is unexpectedly down. MainThread::INFO::2018-05-29 23:56:16,076::state_decorators::92::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Timeout cleared while transitioning <class 'ovirt_hosted_engine_ha.agent.states.EngineStarting'> -> <class 'ovirt_hosted_engine_ha.agent.states.EngineMaybeAway'> MainThread::INFO::2018-05-29 23:56:16,359::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStarting-EngineMaybeAway) sent? 
sent MainThread::INFO::2018-05-29 23:56:16,554::ovf_store::151::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8-49e73e6ba929 --------------- here's an excerpt from vdsm.log: 2018-05-30 00:00:23,114-0700 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call VM.create succeeded in 0.02 seconds (__init__:573) 2018-05-30 00:00:23,115-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') VM wrapper has started (vm:2764) 2018-05-30 00:00:23,125-0700 INFO (vm/50392390) [vdsm.api] START getVolumeSize(sdUUID='c0acdefb-7d16-48ec-9d76-659b8fe33e2a', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='3de00d8c-d8b0-4ae0-9363-38a504f5d2b2', volUUID='d0fcd8de-6105-4f33-a674-727e3a11e89f', options=None) from=internal, task_id=c6d74ce9-1944-457a-9deb-d0a6929f39ce (api:46) 2018-05-30 00:00:23,130-0700 INFO (vm/50392390) [vdsm.api] FINISH getVolumeSize return={'truesize': '11298571776', 'apparentsize': '10737418240'} from=internal, task_id=c6d74ce9-1944-457a-9deb-d0a6929f39ce (api:52) 2018-05-30 00:00:23,131-0700 INFO (vm/50392390) [vds] prepared volume path: (clientIF:497) 2018-05-30 00:00:23,131-0700 INFO (vm/50392390) [vdsm.api] START prepareImage(sdUUID='c0acdefb-7d16-48ec-9d76-659b8fe33e2a', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='3de00d8c-d8b0-4ae0-9363-38a504f5d2b2', leafUUID='d0fcd8de-6105-4f33-a674-727e3a11e89f', allowIllegal=False) from=internal, task_id=d2797364-270f-4800-92b0-d114441194a3 (api:46) 2018-05-30 00:00:23,155-0700 INFO (vm/50392390) [storage.StorageDomain] Fixing permissions on /rhev/data-center/mnt/glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f (fileSD:622) 2018-05-30 00:00:23,156-0700 INFO (vm/50392390) [storage.StorageDomain] Creating domain run directory 
u'/var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a' (fileSD:576) 2018-05-30 00:00:23,156-0700 INFO (vm/50392390) [storage.fileUtils] Creating directory: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a mode: None (fileUtils:197) 2018-05-30 00:00:23,157-0700 INFO (vm/50392390) [storage.StorageDomain] Creating symlink from /rhev/data-center/mnt/glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2 to /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2 (fileSD:579) 2018-05-30 00:00:23,228-0700 INFO (jsonrpc/0) [root] /usr/libexec/vdsm/hooks/after_get_caps/openstacknet_utils.py: rc=0 err= (hooks:110) 2018-05-30 00:00:23,418-0700 INFO (vm/50392390) [vdsm.api] FINISH prepareImage return={'info': {'path': u'engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f', 'type': 'network', 'hosts': [{'port': '0', 'transport': 'tcp', 'name': '192.168.8.11'}, {'port': '0', 'transport': 'tcp', 'name': ' ovirt1.nwfiber.com'}, {'port': '0', 'transport': 'tcp', 'name': ' ovirt2.nwfiber.com'}, {'port': '0', 'transport': 'tcp', 'name': ' ovirt3.nwfiber.com'}], 'protocol': 'gluster'}, 'path': u'/var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f', 'imgVolumesInfo': [{'domainID': 'c0acdefb-7d16-48ec-9d76-659b8fe33e2a', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f', 'volumeID': u'd0fcd8de-6105-4f33-a674-727e3a11e89f', 'leasePath': u'/rhev/data-center/mnt/glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f.lease', 'imageID': '3de00d8c-d8b0-4ae0-9363-38a504f5d2b2'}]} 
from=internal, task_id=d2797364-270f-4800-92b0-d114441194a3 (api:52) 2018-05-30 00:00:23,419-0700 INFO (vm/50392390) [vds] prepared volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f (clientIF:497) 2018-05-30 00:00:23,420-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') Enabling drive monitoring (drivemonitor:54) 2018-05-30 00:00:23,434-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') drive 'hdc' path: 'file=' -> '*file=' (storagexml:323) 2018-05-30 00:00:23,435-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') drive 'vda' path: 'file=/rhev/data-center/00000000-0000-0000-0000-000000000000/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f' -> u'*file=/var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f' (storagexml:323) 2018-05-30 00:00:23,435-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') Using drive lease: {'domainID': 'c0acdefb-7d16-48ec-9d76-659b8fe33e2a', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f', 'volumeID': u'd0fcd8de-6105-4f33-a674-727e3a11e89f', 'leasePath': u'/rhev/data-center/mnt/glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f.lease', 'imageID': '3de00d8c-d8b0-4ae0-9363-38a504f5d2b2'} (lease:327) 2018-05-30 00:00:23,436-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') lease c0acdefb-7d16-48ec-9d76-659b8fe33e2a:d0fcd8de-6105-4f33-a674-727e3a11e89f path: 
'LEASE-PATH:d0fcd8de-6105-4f33-a674-727e3a11e89f:c0acdefb-7d16-48ec-9d76-659b8fe33e2a' -> u'/rhev/data-center/mnt/glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f.lease' (lease:316) 2018-05-30 00:00:23,436-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') lease c0acdefb-7d16-48ec-9d76-659b8fe33e2a:d0fcd8de-6105-4f33-a674-727e3a11e89f offset: 'LEASE-OFFSET:d0fcd8de-6105-4f33-a674-727e3a11e89f:c0acdefb-7d16-48ec-9d76-659b8fe33e2a' -> 0 (lease:316) 2018-05-30 00:00:23,580-0700 INFO (jsonrpc/0) [root] /usr/libexec/vdsm/hooks/after_get_caps/ovirt_provider_ovn_hook: rc=0 err=Failed to get Open VSwitch system-id . err = ['ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)'] (hooks:110) 2018-05-30 00:00:23,582-0700 INFO (jsonrpc/0) [api.host] FINISH getCapabilities return={'status': {'message': 'Done', 'code': 0}, 'info': {u'HBAInventory': {u'iSCSI': [{u'InitiatorName': u'iqn.1994-05.com.redhat:1283ce7a3171'}], u'FC': []}, u'packages2': {u'kernel': {u'release': u'862.3.2.el7.x86_64', u'version': u'3.10.0'}, u'glusterfs-rdma': {u'release': u'1.el7', u'version': u'3.12.9'}, u'glusterfs-fuse': {u'release': u'1.el7', u'version': u'3.12.9'}, u'spice-server': {u'release': u'2.el7_5.3', u'version': u'0.14.0'}, u'librbd1': {u'release': u'2.el7', u'version': u'0.94.5'}, u'vdsm': {u'release': u'1.el7.centos', u'version': u'4.20.27.1'}, u'qemu-kvm': {u'release': u'21.el7_5.3.1', u'version': u'2.10.0'}, u'openvswitch': {u'release': u'3.el7', u'version': u'2.9.0'}, u'libvirt': {u'release': u'14.el7_5.5', u'version': u'3.9.0'}, u'ovirt-hosted-engine-ha': {u'release': u'1.el7.centos', u'version': u'2.2.11'}, u'qemu-img': {u'release': u'21.el7_5.3.1', u'version': u'2.10.0'}, u'mom': {u'release': u'1.el7.centos', u'version': u'0.5.12'}, u'glusterfs': {u'release': u'1.el7', u'version': u'3.12.9'}, 
u'glusterfs-cli': {u'release': u'1.el7', u'version': u'3.12.9'}, u'glusterfs-server': {u'release': u'1.el7', u'version': u'3.12.9'}, u'glusterfs-geo-replication': {u'release': u'1.el7', u'version': u'3.12.9'}}, u'numaNodeDistance': {u'1': [20, 10], u'0': [10, 20]}, u'cpuModel': u'Intel(R) Xeon(R) CPU E5540 @ 2.53GHz', u'nestedVirtualization': False, u'liveMerge': u'true', u'hooks': {u'before_vm_start': {u'50_hostedengine': {u'md5': u'95c810cdcfe4195302a59574a5148289'}, u'50_vhostmd': {u'md5': u'9206bc390bcbf208b06a8e899581be2d'}, u'50_vfio_mdev': {u'md5': u'b77eaad7b8f5ca3825dca1b38b0dd446'}}, u'after_network_setup': {u'30_ethtool_options': {u'md5': u'f04c2ca5dce40663e2ed69806eea917c'}}, u'after_vm_destroy': {u'50_vhostmd': {u'md5': u'd70f7ee0453f632e87c09a157ff8ff66'}, u'50_vfio_mdev': {u'md5': u'35364c6e08ecc58410e31af6c73b17f6'}}, u'after_vm_start': {u'openstacknet_utils.py': {u'md5': u'640ef8b473cbc16b102c5ebf57ea533f'}, u'50_openstacknet': {u'md5': u'ea0a5a715da8c1badbcda28e8b8fa00e'}}, u'after_device_migrate_destination': {u'openstacknet_utils.py': {u'md5': u'640ef8b473cbc16b102c5ebf57ea533f'}, u'50_openstacknet': {u'md5': u'f93c12af7454bbba4c1ef445c2bc9860'}}, u'before_network_setup': {u'50_fcoe': {u'md5': u'28c352339c8beef1e1b05c67d106d062'}}, u'after_get_caps': {u'openstacknet_utils.py': {u'md5': u'640ef8b473cbc16b102c5ebf57ea533f'}, u'50_openstacknet': {u'md5': u'0e00c63ab44a952e722209ea31fd7a71'}, u'ovirt_provider_ovn_hook': {u'md5': u'257990644ee6c7824b522e2ab3077ae7'}}, u'after_nic_hotplug': {u'openstacknet_utils.py': {u'md5': u'640ef8b473cbc16b102c5ebf57ea533f'}, u'50_openstacknet': {u'md5': u'f93c12af7454bbba4c1ef445c2bc9860'}}, u'before_vm_migrate_destination': {u'50_vhostmd': {u'md5': u'9206bc390bcbf208b06a8e899581be2d'}}, u'after_device_create': {u'openstacknet_utils.py': {u'md5': u'640ef8b473cbc16b102c5ebf57ea533f'}, u'50_openstacknet': {u'md5': u'f93c12af7454bbba4c1ef445c2bc9860'}}, u'before_vm_dehibernate': {u'50_vhostmd': {u'md5': 
u'9206bc390bcbf208b06a8e899581be2d'}}, u'before_nic_hotplug': {u'50_vmfex': {u'md5': u'49caba1a5faadd8efacef966f79bc30a'}, u'openstacknet_utils.py': {u'md5': u'640ef8b473cbc16b102c5ebf57ea533f'}, u'50_openstacknet': {u'md5': u'0438de5ff9b6bf8d3160804ff71bf827'}, u'ovirt_provider_ovn_hook': {u'md5': u'ada7e4e757fd0241b682b5cc6a545fc8'}}, u'before_device_migrate_destination': {u'50_vmfex': {u'md5': u'49caba1a5faadd8efacef966f79bc30a'}}, u'before_device_create': {u'50_vmfex': {u'md5': u'49caba1a5faadd8efacef966f79bc30a'}, u'openstacknet_utils.py': {u'md5': u'640ef8b473cbc16b102c5ebf57ea533f'}, u'50_openstacknet': {u'md5': u'0438de5ff9b6bf8d3160804ff71bf827'}, u'ovirt_provider_ovn_hook': {u'md5': u'ada7e4e757fd0241b682b5cc6a545fc8'}}}, u'supportsIPv6': True, u'realtimeKernel': False, u'vmTypes': [u'kvm'], u'liveSnapshot': u'true', u'cpuThreads': u'16', u'kdumpStatus': 0, u'networks': {u'ovirtmgmt': {u'iface': u'ovirtmgmt', u'ipv6autoconf': False, u'addr': u'192.168.8.12', u'dhcpv6': False, u'ipv6addrs': [], u'switch': u'legacy', u'bridged': True, u'mtu': u'1500', u'dhcpv4': True, u'netmask': u'255.255.255.0', u'ipv4defaultroute': True, u'stp': u'off', u'ipv4addrs': [u' 192.168.8.12/24'], u'ipv6gateway': u'::', u'gateway': u'192.168.8.1', u'ports': [u'em1']}, u'Public_Fiber': {u'iface': u'Public_Fiber', u'ipv6autoconf': False, u'addr': u'', u'dhcpv6': False, u'ipv6addrs': [], u'switch': u'legacy', u'bridged': True, u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'', u'ipv4defaultroute': False, u'stp': u'off', u'ipv4addrs': [], u'ipv6gateway': u'::', u'gateway': u'', u'ports': [u'em1.3']}, u'Gluster': {u'iface': u'em4', u'ipv6autoconf': False, u'addr': u'172.172.1.12', u'dhcpv6': False, u'ipv6addrs': [], u'switch': u'legacy', u'bridged': False, u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'255.255.255.0', u'ipv4defaultroute': False, u'ipv4addrs': [u' 172.172.1.12/24'], u'interface': u'em4', u'ipv6gateway': u'::', u'gateway': u''}}, u'kernelArgs': 
u'BOOT_IMAGE=/vmlinuz-3.10.0-862.3.2.el7.x86_64 root=/dev/mapper/centos_ovirt-root ro crashkernel=auto rd.lvm.lv=centos_ovirt/root rd.lvm.lv=centos_ovirt/swap rhgb quiet LANG=en_US.UTF-8', u'bridges': {u'Public_Fiber': {u'ipv6autoconf': False, u'addr': u'', u'dhcpv6': False, u'ipv6addrs': [], u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'', u'ipv4defaultroute': False, u'stp': u'off', u'ipv4addrs': [], u'ipv6gateway': u'::', u'gateway': u'', u'opts': {u'multicast_last_member_count': u'2', u'vlan_protocol': u'0x8100', u'hash_elasticity': u'4', u'multicast_query_response_interval': u'1000', u'group_fwd_mask': u'0x0', u'multicast_snooping': u'1', u'multicast_startup_query_interval': u'3125', u'hello_timer': u'0', u'multicast_querier_interval': u'25500', u'max_age': u'2000', u'hash_max': u'512', u'stp_state': u'0', u'topology_change_detected': u'0', u'priority': u'32768', u'multicast_igmp_version': u'2', u'multicast_membership_interval': u'26000', u'root_path_cost': u'0', u'root_port': u'0', u'multicast_stats_enabled': u'0', u'multicast_startup_query_count': u'2', u'nf_call_iptables': u'0', u'vlan_stats_enabled': u'0', u'hello_time': u'200', u'topology_change': u'0', u'bridge_id': u'8000.00219b900710', u'topology_change_timer': u'0', u'ageing_time': u'30000', u'nf_call_ip6tables': u'0', u'multicast_mld_version': u'1', u'gc_timer': u'25509', u'root_id': u'8000.00219b900710', u'nf_call_arptables': u'0', u'group_addr': u'1:80:c2:0:0:0', u'multicast_last_member_interval': u'100', u'default_pvid': u'1', u'multicast_query_interval': u'12500', u'multicast_query_use_ifaddr': u'0', u'tcn_timer': u'0', u'multicast_router': u'1', u'vlan_filtering': u'0', u'multicast_querier': u'0', u'forward_delay': u'0'}, u'ports': [u'em1.3']}, u'ovirtmgmt': {u'ipv6autoconf': False, u'addr': u'192.168.8.12', u'dhcpv6': False, u'ipv6addrs': [], u'mtu': u'1500', u'dhcpv4': True, u'netmask': u'255.255.255.0', u'ipv4defaultroute': True, u'stp': u'off', u'ipv4addrs': [u'192.168.8.12/24'], 
u'ipv6gateway': u'::', u'gateway': u'192.168.8.1', u'opts': {u'multicast_last_member_count': u'2', u'vlan_protocol': u'0x8100', u'hash_elasticity': u'4', u'multicast_query_response_interval': u'1000', u'group_fwd_mask': u'0x0', u'multicast_snooping': u'1', u'multicast_startup_query_interval': u'3125', u'hello_timer': u'0', u'multicast_querier_interval': u'25500', u'max_age': u'2000', u'hash_max': u'512', u'stp_state': u'0', u'topology_change_detected': u'0', u'priority': u'32768', u'multicast_igmp_version': u'2', u'multicast_membership_interval': u'26000', u'root_path_cost': u'0', u'root_port': u'0', u'multicast_stats_enabled': u'0', u'multicast_startup_query_count': u'2', u'nf_call_iptables': u'0', u'vlan_stats_enabled': u'0', u'hello_time': u'200', u'topology_change': u'0', u'bridge_id': u'8000.00219b900710', u'topology_change_timer': u'0', u'ageing_time': u'30000', u'nf_call_ip6tables': u'0', u'multicast_mld_version': u'1', u'gc_timer': u'13221', u'root_id': u'8000.00219b900710', u'nf_call_arptables': u'0', u'group_addr': u'1:80:c2:0:0:0', u'multicast_last_member_interval': u'100', u'default_pvid': u'1', u'multicast_query_interval': u'12500', u'multicast_query_use_ifaddr': u'0', u'tcn_timer': u'0', u'multicast_router': u'1', u'vlan_filtering': u'0', u'multicast_querier': u'0', u'forward_delay': u'0'}, u'ports': [u'em1']}}, u'uuid': u'4C4C4544-005A-4710-8034-B2C04F4C4B31', u'onlineCpus': u'0,2,4,6,8,10,12,14,1,3,5,7,9,11,13,15', u'nameservers': [u'192.168.8.1', u'8.8.8.8'], u'nics': {u'em4': {u'ipv6autoconf': False, u'addr': u'172.172.1.12', u'speed': 1000, u'dhcpv6': False, u'ipv6addrs': [], u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'255.255.255.0', u'ipv4defaultroute': False, u'ipv4addrs': [u'172.172.1.12/24'], u'hwaddr': u'00:21:9b:90:07:16', u'ipv6gateway': u'::', u'gateway': u''}, u'em1': {u'ipv6autoconf': False, u'addr': u'', u'speed': 1000, u'dhcpv6': False, u'ipv6addrs': [], u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'', u'ipv4defaultroute': 
False, u'ipv4addrs': [], u'hwaddr': u'00:21:9b:90:07:10', u'ipv6gateway': u'::', u'gateway': u''}, u'em3': {u'ipv6autoconf': True, u'addr': u'', u'speed': 0, u'dhcpv6': False, u'ipv6addrs': [], u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'', u'ipv4defaultroute': False, u'ipv4addrs': [], u'hwaddr': u'00:21:9b:90:07:14', u'ipv6gateway': u'::', u'gateway': u''}, u'em2': {u'ipv6autoconf': True, u'addr': u'', u'speed': 0, u'dhcpv6': False, u'ipv6addrs': [], u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'', u'ipv4defaultroute': False, u'ipv4addrs': [], u'hwaddr': u'00:21:9b:90:07:12', u'ipv6gateway': u'::', u'gateway': u''}}, u'software_revision': u'1', u'hostdevPassthrough': u'false', u'clusterLevels': [u'3.6', u'4.0', u'4.1', u'4.2'], u'cpuFlags': u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,xtopology,nonstop_tsc,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,dca,sse4_1,sse4_2,popcnt,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,dtherm,ida,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_n270', u'kernelFeatures': {u'RETP': 1, u'IBRS': 0, u'PTI': 1}, u'ISCSIInitiatorName': u'iqn.1994-05.com.redhat:1283ce7a3171', u'netConfigDirty': u'False', u'selinux': {u'mode': u'1'}, u'autoNumaBalancing': 1, u'reservedMem': u'321', u'containers': False, u'bondings': {}, u'software_version': u'4.20', u'supportedENGINEs': [u'3.6', u'4.0', u'4.1', u'4.2'], u'cpuSpeed': u'2527.017', u'numaNodes': {u'1': {u'totalMemory': u'16371', u'cpus': [1, 3, 5, 7, 9, 11, 13, 15]}, u'0': {u'totalMemory': u'16384', u'cpus': [0, 2, 4, 6, 8, 10, 12, 14]}}, u'cpuSockets': u'2', u'vlans': {u'em1.3': {u'iface': u'em1', u'ipv6autoconf': False, u'addr': u'', u'dhcpv6': False, u'ipv6addrs': [], u'vlanid': 3, u'mtu': u'1500', u'dhcpv4': False, u'netmask': u'', u'ipv4defaultroute': False, u'ipv4addrs': [], 
u'ipv6gateway': u'::', u'gateway': u''}}, u'version_name': u'Snow Man', 'lastClientIface': 'lo', u'cpuCores': u'8', u'hostedEngineDeployed': True, u'hugepages': [2048], u'guestOverhead': u'65', u'additionalFeatures': [u'libgfapi_supported', u'GLUSTER_SNAPSHOT', u'GLUSTER_GEO_REPLICATION', u'GLUSTER_BRICK_MANAGEMENT'], u'kvmEnabled': u'true', u'memSize': u'31996', u'emulatedMachines': [u'pc-i440fx-rhel7.1.0', u'pc-q35-rhel7.3.0', u'rhel6.3.0', u'pc-i440fx-rhel7.5.0', u'pc-i440fx-rhel7.0.0', u'rhel6.1.0', u'pc-i440fx-rhel7.4.0', u'rhel6.6.0', u'pc-q35-rhel7.5.0', u'rhel6.2.0', u'pc', u'pc-i440fx-rhel7.3.0', u'q35', u'pc-i440fx-rhel7.2.0', u'rhel6.4.0', u'pc-q35-rhel7.4.0', u'rhel6.0.0', u'rhel6.5.0'], u'rngSources': [u'hwrng', u'random'], u'operatingSystem': {u'release': u'5.1804.el7.centos.2', u'pretty_name': u'CentOS Linux 7 (Core)', u'version': u'7', u'name': u'RHEL'}}} from=::1,35660 (api:52) 2018-05-30 00:00:23,593-0700 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getCapabilities succeeded in 3.26 seconds (__init__:573) 2018-05-30 00:00:23,724-0700 INFO (vm/50392390) [root] /usr/libexec/vdsm/hooks/before_device_create/50_openstacknet: rc=0 err= (hooks:110) 2018-05-30 00:00:23,919-0700 INFO (vm/50392390) [root] /usr/libexec/vdsm/hooks/before_device_create/50_vmfex: rc=0 err= (hooks:110) 2018-05-30 00:00:24,213-0700 INFO (vm/50392390) [root] /usr/libexec/vdsm/hooks/before_device_create/openstacknet_utils.py: rc=0 err= (hooks:110) 2018-05-30 00:00:24,392-0700 INFO (vm/50392390) [root] /usr/libexec/vdsm/hooks/before_device_create/ovirt_provider_ovn_hook: rc=0 err= (hooks:110) 2018-05-30 00:00:24,401-0700 INFO (jsonrpc/1) [api.host] START getAllVmStats() from=::1,33766 (api:46) 2018-05-30 00:00:24,403-0700 INFO (jsonrpc/1) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::1,33766 (api:52) 2018-05-30 00:00:24,403-0700 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats 
succeeded in 0.01 seconds (__init__:573) 2018-05-30 00:00:24,623-0700 INFO (vm/50392390) [root] /usr/libexec/vdsm/hooks/before_vm_start/50_hostedengine: rc=0 err= (hooks:110) 2018-05-30 00:00:24,836-0700 INFO (vm/50392390) [root] /usr/libexec/vdsm/hooks/before_vm_start/50_vfio_mdev: rc=0 err= (hooks:110) 2018-05-30 00:00:25,022-0700 INFO (vm/50392390) [root] /usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd: rc=0 err= (hooks:110) 2018-05-30 00:00:25,023-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') <?xml version="1.0" encoding="utf-8"?><domain type="kvm" xmlns:ns0=" http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0"> <name>HostedEngine</name> <uuid>50392390-4f89-435c-bf3b-8254b58a4ef7</uuid> <memory>8388608</memory> <currentMemory>8388608</currentMemory> <maxMemory slots="16">33554432</maxMemory> <vcpu current="4">16</vcpu> <sysinfo type="smbios"> <system> <entry name="manufacturer">oVirt</entry> <entry name="product">oVirt Node</entry> <entry name="version">7-5.1804.el7.centos.2</entry> <entry name="serial">4C4C4544-005A-4710-8034-B2C04F4C4B31</entry> <entry name="uuid">50392390-4f89-435c-bf3b-8254b58a4ef7</entry> </system> </sysinfo> <clock adjustment="0" offset="variable"> <timer name="rtc" tickpolicy="catchup"/> <timer name="pit" tickpolicy="delay"/> <timer name="hpet" present="no"/> </clock> <features> <acpi/> </features> <cpu match="exact"> <model>Nehalem</model> <topology cores="1" sockets="16" threads="1"/> <numa> <cell cpus="0,1,2,3" id="0" memory="8388608"/> </numa> </cpu> <cputune/> <devices> <input bus="usb" type="tablet"/> <channel type="unix"> <target name="ovirt-guest-agent.0" type="virtio"/> <source mode="bind" path="/var/lib/libvirt/qemu/channels/50392390-4f89-435c-bf3b-8254b58a4ef7.ovirt-guest-agent.0"/> </channel> <channel type="unix"> <target name="org.qemu.guest_agent.0" type="virtio"/> <source mode="bind" 
path="/var/lib/libvirt/qemu/channels/50392390-4f89-435c-bf3b-8254b58a4ef7.org.qemu.guest_agent.0"/> </channel> <controller index="0" ports="16" type="virtio-serial"> <address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci"/> </controller> <rng model="virtio"> <backend model="random">/dev/urandom</backend> </rng> <graphics autoport="yes" keymap="en-us" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" type="vnc"> <listen network="vdsm-ovirtmgmt" type="network"/> </graphics> <controller type="usb"> <address bus="0x00" domain="0x0000" function="0x2" slot="0x01" type="pci"/> </controller> <controller type="ide"> <address bus="0x00" domain="0x0000" function="0x1" slot="0x01" type="pci"/> </controller> <video> <model heads="1" type="cirrus" vram="16384"/> <address bus="0x00" domain="0x0000" function="0x0" slot="0x05" type="pci"/> </video> <memballoon model="none"/> <disk device="cdrom" snapshot="no" type="file"> <driver error_policy="report" name="qemu" type="raw"/> <source file="" startupPolicy="optional"/> <target bus="ide" dev="hdc"/> <readonly/> <address bus="1" controller="0" target="0" type="drive" unit="0"/> </disk> <disk device="disk" snapshot="no" type="file"> <target bus="virtio" dev="vda"/> <source file="/var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f"/> <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/> <alias name="ua-3de00d8c-d8b0-4ae0-9363-38a504f5d2b2"/> <address bus="0x00" domain="0x0000" function="0x0" slot="0x04" type="pci"/> <serial>3de00d8c-d8b0-4ae0-9363-38a504f5d2b2</serial> </disk> <lease> <key>d0fcd8de-6105-4f33-a674-727e3a11e89f</key> <lockspace>c0acdefb-7d16-48ec-9d76-659b8fe33e2a</lockspace> <target offset="0" path="/rhev/data-center/mnt/glusterSD/192.168.8.11: _engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/3de00d8c-d8b0-4ae0-9363-38a504f5d2b2/d0fcd8de-6105-4f33-a674-727e3a11e89f.lease"/> </lease> 
<interface type="bridge"> <model type="virtio"/> <link state="up"/> <source bridge="ovirtmgmt"/> <alias name="ua-9f93c126-cbb3-4c5b-909e-41a53c2e31fb"/> <address bus="0x00" domain="0x0000" function="0x0" slot="0x02" type="pci"/> <mac address="00:16:3e:2d:bd:b1"/> <filterref filter="vdsm-no-mac-spoofing"/> <bandwidth/> </interface> <channel type="unix"><target name="org.ovirt.hosted-engine-setup.0" type="virtio"/><source mode="bind" path="/var/lib/libvirt/qemu/channels/50392390-4f89-435c-bf3b-8254b58a4ef7.org.ovirt.hosted-engine-setup.0"/></channel></devices> <pm> <suspend-to-disk enabled="no"/> <suspend-to-mem enabled="no"/> </pm> <os> <type arch="x86_64" machine="pc-i440fx-rhel7.3.0">hvm</type> <smbios mode="sysinfo"/> </os> <metadata> <ns0:qos/> <ovirt-vm:vm> <minGuaranteedMemoryMb type="int">8192</minGuaranteedMemoryMb> <clusterVersion>4.2</clusterVersion> <ovirt-vm:custom/> <ovirt-vm:device mac_address="00:16:3e:2d:bd:b1"> <ovirt-vm:custom/> </ovirt-vm:device> <ovirt-vm:device devtype="disk" name="vda">
<ovirt-vm:poolID>00000000-0000-0000-0000-000000000000</ovirt-vm:poolID>
<ovirt-vm:volumeID>d0fcd8de-6105-4f33-a674-727e3a11e89f</ovirt-vm:volumeID> <ovirt-vm:shared>exclusive</ovirt-vm:shared>
<ovirt-vm:imageID>3de00d8c-d8b0-4ae0-9363-38a504f5d2b2</ovirt-vm:imageID>
<ovirt-vm:domainID>c0acdefb-7d16-48ec-9d76-659b8fe33e2a</ovirt-vm:domainID> </ovirt-vm:device> <launchPaused>false</launchPaused> <resumeBehavior>auto_resume</resumeBehavior> </ovirt-vm:vm> </metadata> <on_poweroff>destroy</on_poweroff><on_reboot>destroy</on_reboot><on_crash>destroy</on_crash></domain> (vm:2867) 2018-05-30 00:00:25,252-0700 INFO (monitor/c0acdef) [storage.SANLock] Acquiring host id for domain c0acdefb-7d16-48ec-9d76-659b8fe33e2a (id=2, async=True) (clusterlock:284) 2018-05-30 00:00:26,808-0700 ERROR (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') The vm start process failed (vm:943) Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 872, in _startUnderlyingVm self._run() File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2872, in _run dom.createWithFlags(flags) File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper ret = f(*args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper return func(inst, *args, **kwargs) File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1099, in createWithFlags if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self) libvirtError: Failed to acquire lock: No space left on device 2018-05-30 00:00:26,809-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') Changed state to Down: Failed to acquire lock: No space left on device (code=1) (vm:1683)
------------- The "failed to acquire lock: no space left on device" error has me concerned... Yet there is no device that is full on any of the servers (especially the one I'm on):
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/centos_ovirt-root    8.0G  6.6G  1.5G  82% /
devtmpfs                          16G     0   16G   0% /dev
tmpfs                             16G  4.0K   16G   1% /dev/shm
tmpfs                             16G   18M   16G   1% /run
tmpfs                             16G     0   16G   0% /sys/fs/cgroup
/dev/mapper/gluster-data         177G   59G  119G  33% /gluster/brick2
/dev/mapper/gluster-engine        25G   18G  7.9G  69% /gluster/brick1
/dev/mapper/vg_thin-iso           25G  7.5G   18G  30% /gluster/brick4
/dev/mapper/vg_thin-data--hdd    1.2T  308G  892G  26% /gluster/brick3
/dev/sda1                        497M  313M  185M  63% /boot
192.168.8.11:/engine              25G   20G  5.5G  79% /rhev/data-center/mnt/glusterSD/192.168.8.11:_engine
tmpfs                            3.2G     0  3.2G   0% /run/user/0
192.168.8.11:/data               136G   60G   77G  44% /rhev/data-center/mnt/glusterSD/192.168.8.11:_data
192.168.8.11:/iso                 25G  7.5G   18G  30% /rhev/data-center/mnt/glusterSD/192.168.8.11:_iso
172.172.1.11:/data-hdd           1.2T  308G  892G  26% /rhev/data-center/mnt/glusterSD/172.172.1.11:_data-hdd
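A side note on that error: as far as I understand it, sanlock's "No space left on device" on lock acquisition usually refers to the sanlock lockspace on the storage domain (host-id slots / the ids area) rather than to a full filesystem, so a clean df is not necessarily contradictory. Still, it's worth ruling out a genuinely full mount. Here's a small illustrative helper (`check_df_usage` is my own sketch, not an oVirt tool) that scans `df -P` output for anything near capacity:

```shell
# check_df_usage: read `df -P` output on stdin and print any filesystem
# whose Use% meets or exceeds the threshold given as $1 (default 90).
check_df_usage() {
    awk -v t="${1:-90}" 'NR > 1 {
        pct = $5; sub(/%/, "", pct)          # strip the trailing "%"
        if (pct + 0 >= t)
            printf "NEARLY FULL: %s (%s%% used on %s)\n", $1, pct, $6
    }'
}

# Typical use on a host:
#   df -P | check_df_usage 90
```

Per the listing above, nothing here would trip a 90% threshold except possibly the root filesystem at 82% with a lower threshold, which matches the observation that no device is actually full.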
-------- --Jim
On Tue, May 29, 2018 at 11:53 PM, Jim Kusznir <jim@palousetech.com> wrote:
I've just gotten back to the point where hosted-engine --vm-status does (eventually) return some output other than the "ha-agent or storage is down" error. It currently takes up to two minutes, and the last run returned stale data.
Gluster-wise, gluster volume heal <volume> info shows that the three volumes with all bricks present report 0 entries (and return quickly). The fourth volume, which is missing one brick (but still has two replicas), does return entries, and is currently taking a very long time to come back with them.
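To keep an eye on those entry counts across bricks without scrolling through the full listing, the heal info output can be summarized; `heal_entry_counts` below is a small illustrative helper (not a gluster subcommand) that reads the text of `gluster volume heal <volume> info` on stdin:

```shell
# heal_entry_counts: summarize `gluster volume heal <vol> info` output,
# printing one "brick: N entries" line per brick.
# Usage:  gluster volume heal data-hdd info | heal_entry_counts
heal_entry_counts() {
    awk '
        /^Brick /              { brick = $2 }                             # remember current brick
        /^Number of entries:/  { printf "%s: %s entries\n", brick, $4 }   # emit its pending count
    '
}
```

Watching these counts shrink over repeated runs is one way to tell whether the self-heal daemon is actually making progress.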
Making progress toward getting it online again... I don't know why I get stale data from hosted-engine --vm-status, or how to overcome that...
--Jim
On Tue, May 29, 2018 at 11:38 PM, Jim Kusznir <jim@palousetech.com> wrote:
I believe the gluster data store for the engine is up and working correctly. The rest are not mounted, as the engine hasn't managed to start correctly yet. I did perform the copy-a-junk-file test: copy a file into the data store, check that it's there on another host, then delete it and see that it disappears on the first host; it passed. Here's the info and status. (I have NOT yet performed the steps that Krutika and Ravishankar suggested, as I don't have my data volumes working again yet.)
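For reference, that junk-file replication check can be sketched roughly as below. Everything here is illustrative: `gluster_write_test` is not a real tool, the two mount paths are placeholders, and in practice the second read would run on a different host (for example over ssh) against that host's mount of the same volume:

```shell
# gluster_write_test: minimal replication smoke test. Writes a probe file
# via the first mount path and verifies the same content is visible via
# the second, comparing checksums.
gluster_write_test() {
    a="$1"; b="$2"
    name="synctest.$$"                       # unique-ish probe file name
    echo "gluster sync probe" > "$a/$name"   # write through mount A
    sum_a=$(cksum < "$a/$name")
    sum_b=$(cksum < "$b/$name") || { echo "MISSING on second mount"; return 1; }
    [ "$sum_a" = "$sum_b" ] && echo "OK: replica consistent" || echo "MISMATCH"
    rm -f "$a/$name"                         # clean up the probe file
}

# e.g. gluster_write_test /rhev/.../glusterSD/host1:_data  /mnt/data-on-host2
```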
[root@ovirt2 images]# gluster volume info
Volume Name: data Type: Replicate Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59 Status: Started Snapshot Count: 0 Number of Bricks: 1 x (2 + 1) = 3 Transport-type: tcp Bricks: Brick1: ovirt1.nwfiber.com:/gluster/brick2/data Brick2: ovirt2.nwfiber.com:/gluster/brick2/data Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter) Options Reconfigured: diagnostics.count-fop-hits: on diagnostics.latency-measurement: on changelog.changelog: on geo-replication.ignore-pid-check: on geo-replication.indexing: on server.allow-insecure: on performance.readdir-ahead: on performance.quick-read: off performance.read-ahead: off performance.io-cache: off performance.stat-prefetch: off cluster.eager-lock: enable network.remote-dio: enable cluster.quorum-type: auto cluster.server-quorum-type: server storage.owner-uid: 36 storage.owner-gid: 36 features.shard: on features.shard-block-size: 512MB performance.low-prio-threads: 32 cluster.data-self-heal-algorithm: full cluster.locking-scheme: granular cluster.shd-wait-qlength: 10000 cluster.shd-max-threads: 8 network.ping-timeout: 30 user.cifs: off nfs.disable: on performance.strict-o-direct: on
Volume Name: data-hdd
Type: Replicate
Volume ID: d342a3ab-16f3-49f0-bbcf-f788be8ac5f1
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 172.172.1.11:/gluster/brick3/data-hdd
Brick2: 172.172.1.12:/gluster/brick3/data-hdd
Brick3: 172.172.1.13:/gluster/brick3/data-hdd
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
network.ping-timeout: 30
server.allow-insecure: on
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
performance.readdir-ahead: on
Volume Name: engine
Type: Replicate
Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
Volume Name: iso
Type: Replicate
Volume ID: b1ba15f5-0f0f-4411-89d0-595179f02b92
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick4/iso
Brick2: ovirt2.nwfiber.com:/gluster/brick4/iso
Brick3: ovirt3.nwfiber.com:/gluster/brick4/iso (arbiter)
Options Reconfigured:
performance.strict-o-direct: on
nfs.disable: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 6
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
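One thing worth noticing in the four listings above is option drift between volumes: network.remote-dio is "enable" on data and data-hdd but "off" on engine and iso, and cluster.shd-max-threads is 8 vs 6. A small awk filter makes such differences easy to spot; the heredoc below is an abridged inline sample, and on a host you would pipe the full `gluster volume info` output in instead.

```shell
# Pull a single option out of `gluster volume info` output to spot per-volume
# drift. Abridged sample inlined; on a host:
#   gluster volume info | awk '/^Volume Name:/{v=$3} /^network.remote-dio:/{print v, $2}'
awk '/^Volume Name:/{v=$3} /^network.remote-dio:/{print v, $2}' <<'EOF'
Volume Name: data
network.remote-dio: enable
Volume Name: engine
network.remote-dio: off
EOF
# prints:
#   data enable
#   engine off
```

Whether the drift is intentional (engine and iso are arbiter volumes) is a separate question, but it is the kind of asymmetry worth ruling out when chasing performance differences between volumes.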
[root@ovirt2 images]# gluster volume status
Status of volume: data
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick2/data   49152     0          Y       3226
Brick ovirt2.nwfiber.com:/gluster/brick2/data   49152     0          Y       2967
Brick ovirt3.nwfiber.com:/gluster/brick2/data   49152     0          Y       2554
Self-heal Daemon on localhost                   N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y       4771
Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: data-hdd
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 172.172.1.11:/gluster/brick3/data-hdd     49153     0          Y       3232
Brick 172.172.1.12:/gluster/brick3/data-hdd     49153     0          Y       2976
Brick 172.172.1.13:/gluster/brick3/data-hdd     N/A       N/A        N       N/A
NFS Server on localhost                         N/A       N/A        N       N/A
Self-heal Daemon on localhost                   N/A       N/A        Y       4818
NFS Server on ovirt3.nwfiber.com                N/A       N/A        N       N/A
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y       17853
NFS Server on ovirt1.nwfiber.com                N/A       N/A        N       N/A
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y       4771
Task Status of Volume data-hdd
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: engine
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick1/engine 49154     0          Y       3239
Brick ovirt2.nwfiber.com:/gluster/brick1/engine 49154     0          Y       2982
Brick ovirt3.nwfiber.com:/gluster/brick1/engine 49154     0          Y       2578
Self-heal Daemon on localhost                   N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y       4771
Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: iso
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick4/iso    49155     0          Y       3247
Brick ovirt2.nwfiber.com:/gluster/brick4/iso    49155     0          Y       2990
Brick ovirt3.nwfiber.com:/gluster/brick4/iso    49155     0          Y       2580
Self-heal Daemon on localhost                   N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y       4771
Task Status of Volume iso
------------------------------------------------------------------------------
There are no active volume tasks
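The one red flag in the status output is the data-hdd brick on 172.172.1.13, whose Online column is "N". Scanning for that column mechanically is a one-liner; the heredoc below holds sample rows (assumed unwrapped — the real output wraps long brick names onto two lines, so clean that up first or read it by eye).

```shell
# Flag bricks whose Online column is "N" in `gluster volume status` output.
# Sample rows inlined (and assumed unwrapped); on a host, pipe the real
# command output in instead of the heredoc.
awk '/^Brick /{if ($(NF-1) == "N") print "OFFLINE:", $2}' <<'EOF'
Brick 172.172.1.11:/gluster/brick3/data-hdd  49153  0    Y  3232
Brick 172.172.1.13:/gluster/brick3/data-hdd  N/A    N/A  N  N/A
EOF
# prints: OFFLINE: 172.172.1.13:/gluster/brick3/data-hdd
```

With one brick of a replica-3 volume down, the volume keeps serving I/O from the remaining two replicas, which matches the ongoing heal backlog reported earlier for data-hdd.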
On Tue, May 29, 2018 at 11:30 PM, Sahina Bose <sabose@redhat.com> wrote:
On Wed, May 30, 2018 at 10:42 AM, Jim Kusznir <jim@palousetech.com> wrote:
hosted-engine --deploy failed (the engine would not come up on my existing gluster storage). However, I realized no changes were written to my existing storage, so I went back to trying to get my old engine running.
hosted-engine --vm-status is now taking a very long time (5+ minutes) to return, and it returns stale information everywhere. I thought perhaps the lockspace was corrupt, so I tried to clean that and the metadata, but both are failing (--clean-metadata has hung and I can't even ctrl-c out of it).
How can I reinitialize all the lockspace/metadata safely? There is no engine or VMs running currently....
I think the first thing to make sure of is that your storage is up and running. Can you mount the gluster volumes and access the contents there? Please provide gluster volume info and gluster volume status for the volumes you're using.
--Jim
On Tue, May 29, 2018 at 9:33 PM, Jim Kusznir <jim@palousetech.com> wrote:
> Well, things went from bad to very, very bad....
>
> It appears that during one of the 2-minute lockups, the fencing agents decided that another node in the cluster was down. As a result, 2 of the 3 nodes were simultaneously reset with a fencing agent reboot. After the nodes came back up, the engine would not start. All running VMs (including VMs on the 3rd node that was not rebooted) crashed.
>
> I've now been working for about 3 hours trying to get the engine to come up. I don't know why it won't start. hosted-engine --vm-start says it's starting, but it doesn't start (virsh doesn't show any VMs running). I'm currently running --deploy, as I had run out of other options. I hope this will allow me to re-import all my existing VMs and start them back up after everything comes back up.
>
> I do have an unverified geo-rep backup; I don't know if it is a good backup (there were several prior messages to this list, but I didn't get replies to my questions; it was running in what I believe to be a "strange" way, and the data directories are larger than their source).
>
> I'll see if my --deploy works, and if not, I'll be back with another message/help request.
>
> When the dust settles and I'm at least minimally functional again, I really want to understand why all these technologies designed to offer redundancy conspired to reduce uptime and create failures where there weren't any otherwise. I thought that with hosted engine, 3 oVirt servers, and glusterfs with a minimum of replica 2+arbiter or replica 3, I should have had strong resilience against server failure or disk failure, and should have prevented / recovered from data corruption. Instead, all of the above happened (once I get my cluster back up, I still have to try and recover my webserver VM, which won't boot due to XFS corrupt journal issues created during the gluster crashes).
I think a lot of these issues were rooted > from the upgrade from 4.1 to 4.2. > > --Jim > > On Tue, May 29, 2018 at 6:25 PM, Jim Kusznir <jim@palousetech.com> > wrote: > >> I also finally found the following in my system log on one server: >> >> [10679.524491] INFO: task glusterclogro:14933 blocked for more than >> 120 seconds. >> [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. >> [10679.527144] glusterclogro D ffff97209832bf40 0 14933 >> 1 0x00000080 >> [10679.527150] Call Trace: >> [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 >> [xfs] >> [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >> [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 >> [xfs] >> [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >> [10679.527268] [<ffffffffb992076f>] ? >> system_call_after_swapgs+0xbc/0x160 >> [10679.527271] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 >> [10679.527275] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >> [10679.527279] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [10679.527283] INFO: task glusterposixfsy:14941 blocked for more >> than 120 seconds. >> [10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. >> [10679.529956] glusterposixfsy D ffff972495f84f10 0 14941 >> 1 0x00000080 >> [10679.529961] Call Trace: >> [10679.529966] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [10679.530003] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 >> [xfs] >> [10679.530008] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >> [10679.530038] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 >> [xfs] >> [10679.530042] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >> [10679.530046] [<ffffffffb992076f>] ? 
>> system_call_after_swapgs+0xbc/0x160 >> [10679.530050] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 >> [10679.530054] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >> [10679.530058] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [10679.530062] INFO: task glusteriotwr13:15486 blocked for more >> than 120 seconds. >> [10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. >> [10679.533732] glusteriotwr13 D ffff9720a83f0000 0 15486 >> 1 0x00000080 >> [10679.533738] Call Trace: >> [10679.533747] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [10679.533799] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 >> [xfs] >> [10679.533806] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >> [10679.533846] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 >> [xfs] >> [10679.533852] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >> [10679.533858] [<ffffffffb992076f>] ? >> system_call_after_swapgs+0xbc/0x160 >> [10679.533863] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 >> [10679.533868] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >> [10679.533873] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [10919.512757] INFO: task glusterclogro:14933 blocked for more than >> 120 seconds. >> [10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. >> [10919.516663] glusterclogro D ffff97209832bf40 0 14933 >> 1 0x00000080 >> [10919.516677] Call Trace: >> [10919.516690] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [10919.516696] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 >> [10919.516703] [<ffffffffb951cc04>] ? blk_finish_plug+0x14/0x40 >> [10919.516768] [<ffffffffc05e9224>] ? _xfs_buf_ioapply+0x334/0x460 >> [xfs] >> [10919.516774] [<ffffffffb991432d>] wait_for_completion+0xfd/0x140 >> [10919.516782] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >> [10919.516821] [<ffffffffc05eb0a3>] ? 
_xfs_buf_read+0x23/0x40 [xfs] >> [10919.516859] [<ffffffffc05eafa9>] xfs_buf_submit_wait+0xf9/0x1d0 >> [xfs] >> [10919.516902] [<ffffffffc061b279>] ? >> xfs_trans_read_buf_map+0x199/0x400 [xfs] >> [10919.516940] [<ffffffffc05eb0a3>] _xfs_buf_read+0x23/0x40 [xfs] >> [10919.516977] [<ffffffffc05eb1b9>] xfs_buf_read_map+0xf9/0x160 >> [xfs] >> [10919.517022] [<ffffffffc061b279>] >> xfs_trans_read_buf_map+0x199/0x400 [xfs] >> [10919.517057] [<ffffffffc05c8d04>] xfs_da_read_buf+0xd4/0x100 >> [xfs] >> [10919.517091] [<ffffffffc05c8d53>] xfs_da3_node_read+0x23/0xd0 >> [xfs] >> [10919.517126] [<ffffffffc05c9fee>] >> xfs_da3_node_lookup_int+0x6e/0x2f0 [xfs] >> [10919.517160] [<ffffffffc05d5a1d>] >> xfs_dir2_node_lookup+0x4d/0x170 [xfs] >> [10919.517194] [<ffffffffc05ccf5d>] xfs_dir_lookup+0x1bd/0x1e0 >> [xfs] >> [10919.517233] [<ffffffffc05fd8d9>] xfs_lookup+0x69/0x140 [xfs] >> [10919.517271] [<ffffffffc05fa018>] xfs_vn_lookup+0x78/0xc0 [xfs] >> [10919.517278] [<ffffffffb9425cf3>] lookup_real+0x23/0x60 >> [10919.517283] [<ffffffffb9426702>] __lookup_hash+0x42/0x60 >> [10919.517288] [<ffffffffb942d519>] SYSC_renameat2+0x3a9/0x5a0 >> [10919.517296] [<ffffffffb94d3753>] ? >> selinux_file_free_security+0x23/0x30 >> [10919.517304] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [10919.517309] [<ffffffffb992076f>] ? >> system_call_after_swapgs+0xbc/0x160 >> [10919.517313] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [10919.517318] [<ffffffffb992076f>] ? >> system_call_after_swapgs+0xbc/0x160 >> [10919.517323] [<ffffffffb942e58e>] SyS_renameat2+0xe/0x10 >> [10919.517328] [<ffffffffb942e5ce>] SyS_rename+0x1e/0x20 >> [10919.517333] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >> [10919.517339] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [11159.496095] INFO: task glusteriotwr9:15482 blocked for more than >> 120 seconds. >> [11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. 
>> [11159.498978] glusteriotwr9 D ffff971fa0fa1fa0 0 15482 >> 1 0x00000080 >> [11159.498984] Call Trace: >> [11159.498995] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50 >> [11159.498999] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [11159.499003] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 >> [11159.499056] [<ffffffffc05dd9b7>] ? >> xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs] >> [11159.499082] [<ffffffffc05dd43e>] ? >> xfs_iext_bno_to_irec+0x8e/0xd0 [xfs] >> [11159.499090] [<ffffffffb92f7a12>] ? ktime_get_ts64+0x52/0xf0 >> [11159.499093] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50 >> [11159.499097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 >> [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20 >> [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50 >> [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0 >> [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90 >> [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40 >> [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70 >> [11159.499125] [<ffffffffb9394e85>] >> grab_cache_page_write_begin+0x55/0xc0 >> [11159.499130] [<ffffffffb9484b76>] iomap_write_begin+0x66/0x100 >> [11159.499135] [<ffffffffb9484edf>] iomap_write_actor+0xcf/0x1d0 >> [11159.499140] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 >> [11159.499144] [<ffffffffb94854e7>] iomap_apply+0xb7/0x150 >> [11159.499149] [<ffffffffb9485621>] >> iomap_file_buffered_write+0xa1/0xe0 >> [11159.499153] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 >> [11159.499182] [<ffffffffc05f025d>] >> xfs_file_buffered_aio_write+0x12d/0x2c0 [xfs] >> [11159.499213] [<ffffffffc05f057d>] xfs_file_aio_write+0x18d/0x1b0 >> [xfs] >> [11159.499217] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 >> [11159.499222] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 >> [11159.499225] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 >> [11159.499230] [<ffffffffb992076f>] ? 
>> system_call_after_swapgs+0xbc/0x160 >> [11159.499234] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >> [11159.499238] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than >> 120 seconds. >> [11279.490197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. >> [11279.491665] xfsaild/dm-10 D ffff9720a8660fd0 0 1134 >> 2 0x00000000 >> [11279.491671] Call Trace: >> [11279.491682] [<ffffffffb92a3a2e>] ? >> try_to_del_timer_sync+0x5e/0x90 >> [11279.491688] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [11279.491744] [<ffffffffc060de36>] _xfs_log_force+0x1c6/0x2c0 >> [xfs] >> [11279.491750] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >> [11279.491783] [<ffffffffc0619fec>] ? xfsaild+0x16c/0x6f0 [xfs] >> [11279.491817] [<ffffffffc060df5c>] xfs_log_force+0x2c/0x70 [xfs] >> [11279.491849] [<ffffffffc0619e80>] ? >> xfs_trans_ail_cursor_first+0x90/0x90 [xfs] >> [11279.491880] [<ffffffffc0619fec>] xfsaild+0x16c/0x6f0 [xfs] >> [11279.491913] [<ffffffffc0619e80>] ? >> xfs_trans_ail_cursor_first+0x90/0x90 [xfs] >> [11279.491919] [<ffffffffb92bb161>] kthread+0xd1/0xe0 >> [11279.491926] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 >> [11279.491932] [<ffffffffb9920677>] >> ret_from_fork_nospec_begin+0x21/0x21 >> [11279.491936] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 >> [11279.491976] INFO: task glusterclogfsyn:14934 blocked for more >> than 120 seconds. >> [11279.493466] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. >> [11279.494952] glusterclogfsyn D ffff97209832af70 0 14934 >> 1 0x00000080 >> [11279.494957] Call Trace: >> [11279.494979] [<ffffffffc0309839>] ? >> __split_and_process_bio+0x2e9/0x520 [dm_mod] >> [11279.494983] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [11279.494987] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 >> [11279.494997] [<ffffffffc0309d98>] ? 
dm_make_request+0x128/0x1a0 >> [dm_mod] >> [11279.495001] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 >> [11279.495005] [<ffffffffb99145ad>] >> wait_for_completion_io+0xfd/0x140 >> [11279.495010] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >> [11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 >> [11279.495049] [<ffffffffc06064b9>] >> xfs_blkdev_issue_flush+0x19/0x20 [xfs] >> [11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 >> [xfs] >> [11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >> [11279.495090] [<ffffffffb992076f>] ? >> system_call_after_swapgs+0xbc/0x160 >> [11279.495094] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 >> [11279.495098] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >> [11279.495102] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [11279.495105] INFO: task glusterposixfsy:14941 blocked for more >> than 120 seconds. >> [11279.496606] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. >> [11279.498114] glusterposixfsy D ffff972495f84f10 0 14941 >> 1 0x00000080 >> [11279.498118] Call Trace: >> [11279.498134] [<ffffffffc0309839>] ? >> __split_and_process_bio+0x2e9/0x520 [dm_mod] >> [11279.498138] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [11279.498142] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 >> [11279.498152] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 >> [dm_mod] >> [11279.498156] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 >> [11279.498160] [<ffffffffb99145ad>] >> wait_for_completion_io+0xfd/0x140 >> [11279.498165] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >> [11279.498169] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 >> [11279.498202] [<ffffffffc06064b9>] >> xfs_blkdev_issue_flush+0x19/0x20 [xfs] >> [11279.498231] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 >> [xfs] >> [11279.498238] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >> [11279.498242] [<ffffffffb992076f>] ? 
>> system_call_after_swapgs+0xbc/0x160 >> [11279.498246] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 >> [11279.498250] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >> [11279.498254] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [11279.498257] INFO: task glusteriotwr1:14950 blocked for more than >> 120 seconds. >> [11279.499789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. >> [11279.501343] glusteriotwr1 D ffff97208b6daf70 0 14950 >> 1 0x00000080 >> [11279.501348] Call Trace: >> [11279.501353] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [11279.501390] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 >> [xfs] >> [11279.501396] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >> [11279.501428] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 >> [xfs] >> [11279.501432] [<ffffffffb944ef3f>] generic_write_sync+0x4f/0x70 >> [11279.501461] [<ffffffffc05f0545>] xfs_file_aio_write+0x155/0x1b0 >> [xfs] >> [11279.501466] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 >> [11279.501471] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 >> [11279.501475] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 >> [11279.501479] [<ffffffffb992076f>] ? >> system_call_after_swapgs+0xbc/0x160 >> [11279.501483] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >> [11279.501489] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [11279.501493] INFO: task glusteriotwr4:14953 blocked for more than >> 120 seconds. >> [11279.503047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. >> [11279.504630] glusteriotwr4 D ffff972499f2bf40 0 14953 >> 1 0x00000080 >> [11279.504635] Call Trace: >> [11279.504640] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [11279.504676] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 >> [xfs] >> [11279.504681] [<ffffffffb92cf1b0>] ? 
wake_up_state+0x20/0x20 >> [11279.504710] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 >> [xfs] >> [11279.504714] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >> [11279.504718] [<ffffffffb992076f>] ? >> system_call_after_swapgs+0xbc/0x160 >> [11279.504722] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 >> [11279.504725] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >> [11279.504730] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [12127.466494] perf: interrupt took too long (8263 > 8150), >> lowering kernel.perf_event_max_sample_rate to 24000 >> >> -------------------- >> I think this is the cause of the massive ovirt performance issues >> irrespective of gluster volume. At the time this happened, I was also >> ssh'ed into the host, and was doing some rpm querry commands. I had just >> run rpm -qa |grep glusterfs (to verify what version was actually >> installed), and that command took almost 2 minutes to return! Normally it >> takes less than 2 seconds. That is all pure local SSD IO, too.... >> >> I'm no expert, but its my understanding that anytime a software >> causes these kinds of issues, its a serious bug in the software, even if >> its mis-handled exceptions. Is this correct? >> >> --Jim >> >> On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir <jim@palousetech.com> >> wrote: >> >>> I think this is the profile information for one of the volumes >>> that lives on the SSDs and is fully operational with no down/problem disks: >>> >>> [root@ovirt2 yum.repos.d]# gluster volume profile data info >>> Brick: ovirt2.nwfiber.com:/gluster/brick2/data >>> ---------------------------------------------- >>> Cumulative Stats: >>> Block Size: 256b+ 512b+ >>> 1024b+ >>> No. of Reads: 983 2696 >>> 1059 >>> No. of Writes: 0 1113 >>> 302 >>> >>> Block Size: 2048b+ 4096b+ >>> 8192b+ >>> No. of Reads: 852 88608 >>> 53526 >>> No. of Writes: 522 812340 >>> 76257 >>> >>> Block Size: 16384b+ 32768b+ >>> 65536b+ >>> No. of Reads: 54351 241901 >>> 15024 >>> No. 
of Writes: 21636 8656 >>> 8976 >>> >>> Block Size: 131072b+ >>> No. of Reads: 524156 >>> No. of Writes: 296071 >>> %-latency Avg-latency Min-Latency Max-Latency No. of >>> calls Fop >>> --------- ----------- ----------- ----------- >>> ------------ ---- >>> 0.00 0.00 us 0.00 us 0.00 us >>> 4189 RELEASE >>> 0.00 0.00 us 0.00 us 0.00 us >>> 1257 RELEASEDIR >>> 0.00 46.19 us 12.00 us 187.00 us >>> 69 FLUSH >>> 0.00 147.00 us 78.00 us 367.00 us >>> 86 REMOVEXATTR >>> 0.00 223.46 us 24.00 us 1166.00 us >>> 149 READDIR >>> 0.00 565.34 us 76.00 us 3639.00 us >>> 88 FTRUNCATE >>> 0.00 263.28 us 20.00 us 28385.00 us >>> 228 LK >>> 0.00 98.84 us 2.00 us 880.00 us >>> 1198 OPENDIR >>> 0.00 91.59 us 26.00 us 10371.00 us >>> 3853 STATFS >>> 0.00 494.14 us 17.00 us 193439.00 us >>> 1171 GETXATTR >>> 0.00 299.42 us 35.00 us 9799.00 us >>> 2044 READDIRP >>> 0.00 1965.31 us 110.00 us 382258.00 us >>> 321 XATTROP >>> 0.01 113.40 us 24.00 us 61061.00 us >>> 8134 STAT >>> 0.01 755.38 us 57.00 us 607603.00 us >>> 3196 DISCARD >>> 0.05 2690.09 us 58.00 us 2704761.00 us >>> 3206 OPEN >>> 0.10 119978.25 us 97.00 us 9406684.00 us >>> 154 SETATTR >>> 0.18 101.73 us 28.00 us 700477.00 us >>> 313379 FSTAT >>> 0.23 1059.84 us 25.00 us 2716124.00 us >>> 38255 LOOKUP >>> 0.47 1024.11 us 54.00 us 6197164.00 us >>> 81455 FXATTROP >>> 1.72 2984.00 us 15.00 us 37098954.00 us >>> 103020 FINODELK >>> 5.92 44315.32 us 51.00 us 24731536.00 us >>> 23957 FSYNC >>> 13.27 2399.78 us 25.00 us 22089540.00 us >>> 991005 READ >>> 37.00 5980.43 us 52.00 us 22099889.00 us >>> 1108976 WRITE >>> 41.04 5452.75 us 13.00 us 22102452.00 us >>> 1349053 INODELK >>> >>> Duration: 10026 seconds >>> Data Read: 80046027759 bytes >>> Data Written: 44496632320 bytes >>> >>> Interval 1 Stats: >>> Block Size: 256b+ 512b+ >>> 1024b+ >>> No. of Reads: 983 2696 >>> 1059 >>> No. of Writes: 0 838 >>> 185 >>> >>> Block Size: 2048b+ 4096b+ >>> 8192b+ >>> No. of Reads: 852 85856 >>> 51575 >>> No. 
of Writes: 382 705802 >>> 57812 >>> >>> Block Size: 16384b+ 32768b+ >>> 65536b+ >>> No. of Reads: 52673 232093 >>> 14984 >>> No. of Writes: 13499 4908 >>> 4242 >>> >>> Block Size: 131072b+ >>> No. of Reads: 460040 >>> No. of Writes: 6411 >>> %-latency Avg-latency Min-Latency Max-Latency No. of >>> calls Fop >>> --------- ----------- ----------- ----------- >>> ------------ ---- >>> 0.00 0.00 us 0.00 us 0.00 us >>> 2093 RELEASE >>> 0.00 0.00 us 0.00 us 0.00 us >>> 1093 RELEASEDIR >>> 0.00 53.38 us 26.00 us 111.00 us >>> 16 FLUSH >>> 0.00 145.14 us 78.00 us 367.00 us >>> 71 REMOVEXATTR >>> 0.00 190.96 us 114.00 us 298.00 us >>> 71 SETATTR >>> 0.00 213.38 us 24.00 us 1145.00 us >>> 90 READDIR >>> 0.00 263.28 us 20.00 us 28385.00 us >>> 228 LK >>> 0.00 101.76 us 2.00 us 880.00 us >>> 1093 OPENDIR >>> 0.01 93.60 us 27.00 us 10371.00 us >>> 3090 STATFS >>> 0.02 537.47 us 17.00 us 193439.00 us >>> 1038 GETXATTR >>> 0.03 297.44 us 35.00 us 9799.00 us >>> 1990 READDIRP >>> 0.03 2357.28 us 110.00 us 382258.00 us >>> 253 XATTROP >>> 0.04 385.93 us 58.00 us 47593.00 us >>> 2091 OPEN >>> 0.04 114.86 us 24.00 us 61061.00 us >>> 7715 STAT >>> 0.06 444.59 us 57.00 us 333240.00 us >>> 3053 DISCARD >>> 0.42 316.24 us 25.00 us 290728.00 us >>> 29823 LOOKUP >>> 0.73 257.92 us 54.00 us 344812.00 us >>> 63296 FXATTROP >>> 1.37 98.30 us 28.00 us 67621.00 us >>> 313172 FSTAT >>> 1.58 2124.69 us 51.00 us 849200.00 us >>> 16717 FSYNC >>> 5.73 162.46 us 52.00 us 748492.00 us >>> 794079 WRITE >>> 7.19 2065.17 us 16.00 us 37098954.00 us >>> 78381 FINODELK >>> 36.44 886.32 us 25.00 us 2216436.00 us >>> 925421 READ >>> 46.30 1178.04 us 13.00 us 1700704.00 us >>> 884635 INODELK >>> >>> Duration: 7485 seconds >>> Data Read: 71250527215 bytes >>> Data Written: 5119903744 bytes >>> >>> Brick: ovirt3.nwfiber.com:/gluster/brick2/data >>> ---------------------------------------------- >>> Cumulative Stats: >>> Block Size: 1b+ >>> No. of Reads: 0 >>> No. 
of Writes: 3264419 >>> %-latency Avg-latency Min-Latency Max-Latency No. of >>> calls Fop >>> --------- ----------- ----------- ----------- >>> ------------ ---- >>> 0.00 0.00 us 0.00 us 0.00 us >>> 90 FORGET >>> 0.00 0.00 us 0.00 us 0.00 us >>> 9462 RELEASE >>> 0.00 0.00 us 0.00 us 0.00 us >>> 4254 RELEASEDIR >>> 0.00 50.52 us 13.00 us 190.00 us >>> 71 FLUSH >>> 0.00 186.97 us 87.00 us 713.00 us >>> 86 REMOVEXATTR >>> 0.00 79.32 us 33.00 us 189.00 us >>> 228 LK >>> 0.00 220.98 us 129.00 us 513.00 us >>> 86 SETATTR >>> 0.01 259.30 us 26.00 us 2632.00 us >>> 137 READDIR >>> 0.02 322.76 us 145.00 us 2125.00 us >>> 321 XATTROP >>> 0.03 109.55 us 2.00 us 1258.00 us >>> 1193 OPENDIR >>> 0.05 70.21 us 21.00 us 431.00 us >>> 3196 DISCARD >>> 0.05 169.26 us 21.00 us 2315.00 us >>> 1545 GETXATTR >>> 0.12 176.85 us 63.00 us 2844.00 us >>> 3206 OPEN >>> 0.61 303.49 us 90.00 us 3085.00 us >>> 9633 FSTAT >>> 2.44 305.66 us 28.00 us 3716.00 us >>> 38230 LOOKUP >>> 4.52 266.22 us 55.00 us 53424.00 us >>> 81455 FXATTROP >>> 6.96 1397.99 us 51.00 us 64822.00 us >>> 23889 FSYNC >>> 16.48 84.74 us 25.00 us 6917.00 us >>> 932592 WRITE >>> 30.16 106.90 us 13.00 us 3920189.00 us >>> 1353046 INODELK >>> 38.55 1794.52 us 14.00 us 16210553.00 us >>> 103039 FINODELK >>> >>> Duration: 66562 seconds >>> Data Read: 0 bytes >>> Data Written: 3264419 bytes >>> >>> Interval 1 Stats: >>> Block Size: 1b+ >>> No. of Reads: 0 >>> No. of Writes: 794080 >>> %-latency Avg-latency Min-Latency Max-Latency No. 

[Adding gluster-users back] Nothing looks amiss in the volume info and status. Can you check agent.log and broker.log? They will be under /var/log/ovirt-hosted-engine-ha/. Also check the gluster client logs, under /var/log/glusterfs/rhev-data-center-mnt-glusterSD<volume>.log. On Wed, May 30, 2018 at 12:08 PM, Jim Kusznir <jim@palousetech.com> wrote:
I believe the gluster data store for the engine is up and working correctly. The rest are not mounted, as the engine hasn't managed to start correctly yet. I did perform the copy-a-junk-file test: copy a file into the data store, check that it's there on another host, then delete it and see that it disappeared on the first host; it passed that test. Here's the info and status. (I have NOT performed the steps that Krutika and Ravishankar suggested yet, as I don't have my data volumes working again yet.)
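In command form, the smoke test described above looks roughly like this; the mount path here is a placeholder, not the actual path from this cluster:

```shell
# Sketch of the replication smoke test (hypothetical mount path; substitute
# the real glusterSD mount point on your hosts).
MNT=/rhev/data-center/mnt/glusterSD/ovirt1.nwfiber.com:_engine
echo "canary-$(date +%s)" > "$MNT/repl-test.txt"          # write on host 1
ssh ovirt2.nwfiber.com "cat $MNT/repl-test.txt"           # same content on host 2?
rm -f "$MNT/repl-test.txt"                                # delete on host 1
ssh ovirt2.nwfiber.com "test ! -e $MNT/repl-test.txt && echo gone"
```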
[root@ovirt2 images]# gluster volume info
Volume Name: data
Type: Replicate
Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick2/data
Brick2: ovirt2.nwfiber.com:/gluster/brick2/data
Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter)
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
server.allow-insecure: on
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
Volume Name: data-hdd
Type: Replicate
Volume ID: d342a3ab-16f3-49f0-bbcf-f788be8ac5f1
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 172.172.1.11:/gluster/brick3/data-hdd
Brick2: 172.172.1.12:/gluster/brick3/data-hdd
Brick3: 172.172.1.13:/gluster/brick3/data-hdd
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
network.ping-timeout: 30
server.allow-insecure: on
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
performance.readdir-ahead: on
Volume Name: engine
Type: Replicate
Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
Volume Name: iso
Type: Replicate
Volume ID: b1ba15f5-0f0f-4411-89d0-595179f02b92
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick4/iso
Brick2: ovirt2.nwfiber.com:/gluster/brick4/iso
Brick3: ovirt3.nwfiber.com:/gluster/brick4/iso (arbiter)
Options Reconfigured:
performance.strict-o-direct: on
nfs.disable: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 6
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
[root@ovirt2 images]# gluster volume status
Status of volume: data
Gluster process                                  TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick2/data    49152     0          Y       3226
Brick ovirt2.nwfiber.com:/gluster/brick2/data    49152     0          Y       2967
Brick ovirt3.nwfiber.com:/gluster/brick2/data    49152     0          Y       2554
Self-heal Daemon on localhost                    N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com           N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com           N/A       N/A        Y       4771

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: data-hdd
Gluster process                                  TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 172.172.1.11:/gluster/brick3/data-hdd      49153     0          Y       3232
Brick 172.172.1.12:/gluster/brick3/data-hdd      49153     0          Y       2976
Brick 172.172.1.13:/gluster/brick3/data-hdd      N/A       N/A        N       N/A
NFS Server on localhost                          N/A       N/A        N       N/A
Self-heal Daemon on localhost                    N/A       N/A        Y       4818
NFS Server on ovirt3.nwfiber.com                 N/A       N/A        N       N/A
Self-heal Daemon on ovirt3.nwfiber.com           N/A       N/A        Y       17853
NFS Server on ovirt1.nwfiber.com                 N/A       N/A        N       N/A
Self-heal Daemon on ovirt1.nwfiber.com           N/A       N/A        Y       4771

Task Status of Volume data-hdd
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: engine
Gluster process                                  TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick1/engine  49154     0          Y       3239
Brick ovirt2.nwfiber.com:/gluster/brick1/engine  49154     0          Y       2982
Brick ovirt3.nwfiber.com:/gluster/brick1/engine  49154     0          Y       2578
Self-heal Daemon on localhost                    N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com           N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com           N/A       N/A        Y       4771

Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: iso
Gluster process                                  TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick4/iso     49155     0          Y       3247
Brick ovirt2.nwfiber.com:/gluster/brick4/iso     49155     0          Y       2990
Brick ovirt3.nwfiber.com:/gluster/brick4/iso     49155     0          Y       2580
Self-heal Daemon on localhost                    N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com           N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com           N/A       N/A        Y       4771

Task Status of Volume iso
------------------------------------------------------------------------------
There are no active volume tasks
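For what it's worth, the status above shows the 172.172.1.13 brick of data-hdd offline (Online: N). The usual recovery sequence, sketched from the standard gluster CLI (run on the node with the down brick, and verify against your gluster version):

```shell
systemctl status glusterd            # confirm the management daemon is running
gluster volume start data-hdd force  # respawn any missing brick processes
gluster volume status data-hdd       # the brick should now show Online: Y
gluster volume heal data-hdd         # then kick off a heal of the pending entries
gluster volume heal data-hdd info    # and watch the entry counts drain
```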
On Tue, May 29, 2018 at 11:30 PM, Sahina Bose <sabose@redhat.com> wrote:
On Wed, May 30, 2018 at 10:42 AM, Jim Kusznir <jim@palousetech.com> wrote:
hosted-engine --deploy failed (would not come up on my existing gluster storage). However, I realized no changes were written to my existing storage. So, I went back to trying to get my old engine running.
hosted-engine --vm-status is now taking a very long time (5+ minutes) to return, and it returns stale information everywhere. I thought perhaps the lockspace is corrupt, so I tried to clean that and the metadata, but both are failing (--clean-metadata has hung and I can't even ctrl-c out of it).
How can I reinitialize all the lockspace/metadata safely? There are no engine or VMs running currently...
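For reference, the usual sequence for rebuilding the hosted-engine sanlock lockspace and metadata looks roughly like this; this is a sketch, not a verified procedure for this cluster, so check the flags against your oVirt 4.2 docs first (it also assumes the storage underneath is healthy, or the commands will hang just like above):

```shell
hosted-engine --set-maintenance --mode=global   # stop HA agents from acting
hosted-engine --reinitialize-lockspace --force  # rebuild the sanlock lockspace
# On each host, with ovirt-ha-agent stopped and the engine VM down:
hosted-engine --clean-metadata --force-clean
hosted-engine --set-maintenance --mode=none     # re-enable HA
hosted-engine --vm-status                       # should return quickly and be fresh
```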
I think the first thing to make sure is that your storage is up and running. So can you mount the gluster volumes and access the contents there? Please provide the gluster volume info and gluster volume status of the volumes that you're using.
--Jim
On Tue, May 29, 2018 at 9:33 PM, Jim Kusznir <jim@palousetech.com> wrote:
Well, things went from bad to very, very bad....
It appears that during one of the 2-minute lockups, the fencing agents decided that another node in the cluster was down. As a result, 2 of the 3 nodes were simultaneously reset by the fencing agents. After the nodes came back up, the engine would not start. All running VMs (including VMs on the 3rd node, which was not rebooted) crashed.
I've now been working for about 3 hours trying to get the engine to come up. I don't know why it won't start. hosted-engine --vm-start says it's starting, but it doesn't start (virsh doesn't show any VMs running). I'm currently running --deploy, as I had run out of options for anything else I could come up with. I hope this will allow me to re-import all my existing VMs and start them back up after everything comes back up.
I do have an unverified geo-rep backup; I don't know if it is a good backup (there were several prior messages to this list, but I didn't get replies to my questions. It was behaving in what I believe to be a "strange" way, and the data directories are larger than their source).
I'll see if my --deploy works, and if not, I'll be back with another message/help request.
When the dust settles and I'm at least minimally functional again, I really want to understand why all these technologies designed to offer redundancy conspired to reduce uptime and create failures where there weren't any otherwise. I thought that with hosted engine, 3 oVirt servers, and glusterfs with a minimum of replica 2+arbiter or replica 3, I should have had strong resilience against server or disk failure, and should have been able to prevent or recover from data corruption. Instead, all of the above happened. (Once I get my cluster back up, I still have to try to recover my webserver VM, which won't boot due to XFS corrupt-journal issues created during the gluster crashes.) I think a lot of these issues were rooted in the upgrade from 4.1 to 4.2.
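On the XFS corrupt-journal symptom mentioned above: this isn't the author's procedure, but the standard xfs_repair approach would be to attach the VM's disk to a rescue environment and work on it unmounted. The device path below is hypothetical:

```shell
# Dry run first: report damage without writing anything.
xfs_repair -n /dev/vg0/webserver-disk
# Try a normal repair; it will refuse if the log is dirty.
xfs_repair /dev/vg0/webserver-disk
# Last resort only, if mounting also fails: zero the log.
# This discards in-flight metadata updates and may lose recent writes.
# xfs_repair -L /dev/vg0/webserver-disk
```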
--Jim
On Tue, May 29, 2018 at 6:25 PM, Jim Kusznir <jim@palousetech.com> wrote:
I also finally found the following in my system log on one server:
[10679.524491] INFO: task glusterclogro:14933 blocked for more than 120 seconds.
[10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[10679.527144] glusterclogro   D ffff97209832bf40     0 14933      1 0x00000080
[10679.527150] Call Trace:
[10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70
[10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[10679.527268] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[10679.527271] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20
[10679.527275] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[10679.527279] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds.
[10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[10679.529956] glusterposixfsy D ffff972495f84f10     0 14941      1 0x00000080
[10679.529961] Call Trace:
[10679.529966] [<ffffffffb9913f79>] schedule+0x29/0x70
[10679.530003] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[10679.530008] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[10679.530038] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[10679.530042] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[10679.530046] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[10679.530050] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20
[10679.530054] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[10679.530058] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120 seconds.
[10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[10679.533732] glusteriotwr13  D ffff9720a83f0000     0 15486      1 0x00000080
[10679.533738] Call Trace:
[10679.533747] [<ffffffffb9913f79>] schedule+0x29/0x70
[10679.533799] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[10679.533806] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[10679.533846] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[10679.533852] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[10679.533858] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[10679.533863] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20
[10679.533868] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[10679.533873] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[10919.512757] INFO: task glusterclogro:14933 blocked for more than 120 seconds.
[10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[10919.516663] glusterclogro   D ffff97209832bf40     0 14933      1 0x00000080
[10919.516677] Call Trace:
[10919.516690] [<ffffffffb9913f79>] schedule+0x29/0x70
[10919.516696] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
[10919.516703] [<ffffffffb951cc04>] ? blk_finish_plug+0x14/0x40
[10919.516768] [<ffffffffc05e9224>] ? _xfs_buf_ioapply+0x334/0x460 [xfs]
[10919.516774] [<ffffffffb991432d>] wait_for_completion+0xfd/0x140
[10919.516782] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[10919.516821] [<ffffffffc05eb0a3>] ? _xfs_buf_read+0x23/0x40 [xfs]
[10919.516859] [<ffffffffc05eafa9>] xfs_buf_submit_wait+0xf9/0x1d0 [xfs]
[10919.516902] [<ffffffffc061b279>] ? xfs_trans_read_buf_map+0x199/0x400 [xfs]
[10919.516940] [<ffffffffc05eb0a3>] _xfs_buf_read+0x23/0x40 [xfs]
[10919.516977] [<ffffffffc05eb1b9>] xfs_buf_read_map+0xf9/0x160 [xfs]
[10919.517022] [<ffffffffc061b279>] xfs_trans_read_buf_map+0x199/0x400 [xfs]
[10919.517057] [<ffffffffc05c8d04>] xfs_da_read_buf+0xd4/0x100 [xfs]
[10919.517091] [<ffffffffc05c8d53>] xfs_da3_node_read+0x23/0xd0 [xfs]
[10919.517126] [<ffffffffc05c9fee>] xfs_da3_node_lookup_int+0x6e/0x2f0 [xfs]
[10919.517160] [<ffffffffc05d5a1d>] xfs_dir2_node_lookup+0x4d/0x170 [xfs]
[10919.517194] [<ffffffffc05ccf5d>] xfs_dir_lookup+0x1bd/0x1e0 [xfs]
[10919.517233] [<ffffffffc05fd8d9>] xfs_lookup+0x69/0x140 [xfs]
[10919.517271] [<ffffffffc05fa018>] xfs_vn_lookup+0x78/0xc0 [xfs]
[10919.517278] [<ffffffffb9425cf3>] lookup_real+0x23/0x60
[10919.517283] [<ffffffffb9426702>] __lookup_hash+0x42/0x60
[10919.517288] [<ffffffffb942d519>] SYSC_renameat2+0x3a9/0x5a0
[10919.517296] [<ffffffffb94d3753>] ? selinux_file_free_security+0x23/0x30
[10919.517304] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[10919.517309] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[10919.517313] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[10919.517318] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[10919.517323] [<ffffffffb942e58e>] SyS_renameat2+0xe/0x10
[10919.517328] [<ffffffffb942e5ce>] SyS_rename+0x1e/0x20
[10919.517333] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[10919.517339] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11159.496095] INFO: task glusteriotwr9:15482 blocked for more than 120 seconds.
[11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11159.498978] glusteriotwr9   D ffff971fa0fa1fa0     0 15482      1 0x00000080
[11159.498984] Call Trace:
[11159.498995] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50
[11159.498999] [<ffffffffb9913f79>] schedule+0x29/0x70
[11159.499003] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
[11159.499056] [<ffffffffc05dd9b7>] ? xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs]
[11159.499082] [<ffffffffc05dd43e>] ? xfs_iext_bno_to_irec+0x8e/0xd0 [xfs]
[11159.499090] [<ffffffffb92f7a12>] ? ktime_get_ts64+0x52/0xf0
[11159.499093] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50
[11159.499097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130
[11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20
[11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50
[11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0
[11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90
[11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40
[11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70
[11159.499125] [<ffffffffb9394e85>] grab_cache_page_write_begin+0x55/0xc0
[11159.499130] [<ffffffffb9484b76>] iomap_write_begin+0x66/0x100
[11159.499135] [<ffffffffb9484edf>] iomap_write_actor+0xcf/0x1d0
[11159.499140] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80
[11159.499144] [<ffffffffb94854e7>] iomap_apply+0xb7/0x150
[11159.499149] [<ffffffffb9485621>] iomap_file_buffered_write+0xa1/0xe0
[11159.499153] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80
[11159.499182] [<ffffffffc05f025d>] xfs_file_buffered_aio_write+0x12d/0x2c0 [xfs]
[11159.499213] [<ffffffffc05f057d>] xfs_file_aio_write+0x18d/0x1b0 [xfs]
[11159.499217] [<ffffffffb941a533>] do_sync_write+0x93/0xe0
[11159.499222] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0
[11159.499225] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0
[11159.499230] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11159.499234] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11159.499238] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than 120 seconds.
[11279.490197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.491665] xfsaild/dm-10   D ffff9720a8660fd0     0  1134      2 0x00000000
[11279.491671] Call Trace:
[11279.491682] [<ffffffffb92a3a2e>] ? try_to_del_timer_sync+0x5e/0x90
[11279.491688] [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.491744] [<ffffffffc060de36>] _xfs_log_force+0x1c6/0x2c0 [xfs]
[11279.491750] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.491783] [<ffffffffc0619fec>] ? xfsaild+0x16c/0x6f0 [xfs]
[11279.491817] [<ffffffffc060df5c>] xfs_log_force+0x2c/0x70 [xfs]
[11279.491849] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[11279.491880] [<ffffffffc0619fec>] xfsaild+0x16c/0x6f0 [xfs]
[11279.491913] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[11279.491919] [<ffffffffb92bb161>] kthread+0xd1/0xe0
[11279.491926] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40
[11279.491932] [<ffffffffb9920677>] ret_from_fork_nospec_begin+0x21/0x21
[11279.491936] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40
[11279.491976] INFO: task glusterclogfsyn:14934 blocked for more than 120 seconds.
[11279.493466] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.494952] glusterclogfsyn D ffff97209832af70     0 14934      1 0x00000080
[11279.494957] Call Trace:
[11279.494979] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod]
[11279.494983] [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.494987] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
[11279.494997] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod]
[11279.495001] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130
[11279.495005] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140
[11279.495010] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110
[11279.495049] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs]
[11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs]
[11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[11279.495090] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11279.495094] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20
[11279.495098] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11279.495102] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11279.495105] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds.
[11279.496606] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.498114] glusterposixfsy D ffff972495f84f10     0 14941      1 0x00000080
[11279.498118] Call Trace:
[11279.498134] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod]
[11279.498138] [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.498142] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
[11279.498152] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod]
[11279.498156] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130
[11279.498160] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140
[11279.498165] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.498169] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110
[11279.498202] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs]
[11279.498231] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs]
[11279.498238] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[11279.498242] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11279.498246] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20
[11279.498250] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11279.498254] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11279.498257] INFO: task glusteriotwr1:14950 blocked for more than 120 seconds.
[11279.499789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.501343] glusteriotwr1   D ffff97208b6daf70     0 14950      1 0x00000080
[11279.501348] Call Trace:
[11279.501353] [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.501390] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[11279.501396] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.501428] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[11279.501432] [<ffffffffb944ef3f>] generic_write_sync+0x4f/0x70
[11279.501461] [<ffffffffc05f0545>] xfs_file_aio_write+0x155/0x1b0 [xfs]
[11279.501466] [<ffffffffb941a533>] do_sync_write+0x93/0xe0
[11279.501471] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0
[11279.501475] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0
[11279.501479] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11279.501483] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11279.501489] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11279.501493] INFO: task glusteriotwr4:14953 blocked for more than 120 seconds.
[11279.503047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.504630] glusteriotwr4   D ffff972499f2bf40     0 14953      1 0x00000080
[11279.504635] Call Trace:
[11279.504640] [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.504676] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[11279.504681] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.504710] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[11279.504714] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[11279.504718] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11279.504722] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20
[11279.504725] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11279.504730] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[12127.466494] perf: interrupt took too long (8263 > 8150), lowering kernel.perf_event_max_sample_rate to 24000
--------------------
I think this is the cause of the massive ovirt performance issues, irrespective of gluster volume. At the time this happened, I was also ssh'ed into the host and was running some rpm query commands. I had just run rpm -qa | grep glusterfs (to verify what version was actually installed), and that command took almost 2 minutes to return! Normally it takes less than 2 seconds. That is all pure local SSD IO, too....
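A quick way to confirm (and count) these hung-task events on an affected host is to grep the kernel ring buffer. The dmesg excerpt in this example is fabricated for illustration; on a real host you would pipe `dmesg` directly:

```shell
# On a real host: dmesg | grep -c 'blocked for more than 120 seconds'
# A fabricated two-entry dmesg sample stands in here so the pipeline can be shown.
dmesg_sample='[10679.524491] INFO: task glusterclogro:14933 blocked for more than 120 seconds.
[11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than 120 seconds.
[12127.466494] perf: interrupt took too long (8263 > 8150)'
printf '%s\n' "$dmesg_sample" | grep -c 'blocked for more than 120 seconds'
```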
I'm no expert, but it's my understanding that any time software causes these kinds of issues, it's a serious bug in the software, even if it's mishandled exceptions. Is this correct?
--Jim
On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir <jim@palousetech.com> wrote:
I think this is the profile information for one of the volumes that lives on the SSDs and is fully operational with no down/problem disks:
[root@ovirt2 yum.repos.d]# gluster volume profile data info
Brick: ovirt2.nwfiber.com:/gluster/brick2/data
----------------------------------------------
Cumulative Stats:
   Block Size:      256b+     512b+    1024b+
 No. of Reads:        983      2696      1059
No. of Writes:          0      1113       302

   Block Size:     2048b+    4096b+    8192b+
 No. of Reads:        852     88608     53526
No. of Writes:        522    812340     76257

   Block Size:    16384b+   32768b+   65536b+
 No. of Reads:      54351    241901     15024
No. of Writes:      21636      8656      8976

   Block Size:   131072b+
 No. of Reads:     524156
No. of Writes:     296071

 %-latency   Avg-latency    Min-Latency    Max-Latency    No. of calls    Fop
 ---------   -----------    -----------    -----------    ------------    ----
      0.00       0.00 us        0.00 us        0.00 us            4189    RELEASE
      0.00       0.00 us        0.00 us        0.00 us            1257    RELEASEDIR
      0.00      46.19 us       12.00 us      187.00 us              69    FLUSH
      0.00     147.00 us       78.00 us      367.00 us              86    REMOVEXATTR
      0.00     223.46 us       24.00 us     1166.00 us             149    READDIR
      0.00     565.34 us       76.00 us     3639.00 us              88    FTRUNCATE
      0.00     263.28 us       20.00 us    28385.00 us             228    LK
      0.00      98.84 us        2.00 us      880.00 us            1198    OPENDIR
      0.00      91.59 us       26.00 us    10371.00 us            3853    STATFS
      0.00     494.14 us       17.00 us   193439.00 us            1171    GETXATTR
      0.00     299.42 us       35.00 us     9799.00 us            2044    READDIRP
      0.00    1965.31 us      110.00 us   382258.00 us             321    XATTROP
      0.01     113.40 us       24.00 us    61061.00 us            8134    STAT
      0.01     755.38 us       57.00 us   607603.00 us            3196    DISCARD
      0.05    2690.09 us       58.00 us  2704761.00 us            3206    OPEN
      0.10  119978.25 us       97.00 us  9406684.00 us             154    SETATTR
      0.18     101.73 us       28.00 us   700477.00 us          313379    FSTAT
      0.23    1059.84 us       25.00 us  2716124.00 us           38255    LOOKUP
      0.47    1024.11 us       54.00 us  6197164.00 us           81455    FXATTROP
      1.72    2984.00 us       15.00 us 37098954.00 us          103020    FINODELK
      5.92   44315.32 us       51.00 us 24731536.00 us           23957    FSYNC
     13.27    2399.78 us       25.00 us 22089540.00 us          991005    READ
     37.00    5980.43 us       52.00 us 22099889.00 us         1108976    WRITE
     41.04    5452.75 us       13.00 us 22102452.00 us         1349053    INODELK

Duration: 10026 seconds
Data Read: 80046027759 bytes
Data Written: 44496632320 bytes
Interval 1 Stats:
   Block Size:      256b+     512b+    1024b+
 No. of Reads:        983      2696      1059
No. of Writes:          0       838       185

   Block Size:     2048b+    4096b+    8192b+
 No. of Reads:        852     85856     51575
No. of Writes:        382    705802     57812

   Block Size:    16384b+   32768b+   65536b+
 No. of Reads:      52673    232093     14984
No. of Writes:      13499      4908      4242

   Block Size:   131072b+
 No. of Reads:     460040
No. of Writes:       6411

 %-latency   Avg-latency    Min-Latency    Max-Latency    No. of calls    Fop
 ---------   -----------    -----------    -----------    ------------    ----
      0.00       0.00 us        0.00 us        0.00 us            2093    RELEASE
      0.00       0.00 us        0.00 us        0.00 us            1093    RELEASEDIR
      0.00      53.38 us       26.00 us      111.00 us              16    FLUSH
      0.00     145.14 us       78.00 us      367.00 us              71    REMOVEXATTR
      0.00     190.96 us      114.00 us      298.00 us              71    SETATTR
      0.00     213.38 us       24.00 us     1145.00 us              90    READDIR
      0.00     263.28 us       20.00 us    28385.00 us             228    LK
      0.00     101.76 us        2.00 us      880.00 us            1093    OPENDIR
      0.01      93.60 us       27.00 us    10371.00 us            3090    STATFS
      0.02     537.47 us       17.00 us   193439.00 us            1038    GETXATTR
      0.03     297.44 us       35.00 us     9799.00 us            1990    READDIRP
      0.03    2357.28 us      110.00 us   382258.00 us             253    XATTROP
      0.04     385.93 us       58.00 us    47593.00 us            2091    OPEN
      0.04     114.86 us       24.00 us    61061.00 us            7715    STAT
      0.06     444.59 us       57.00 us   333240.00 us            3053    DISCARD
      0.42     316.24 us       25.00 us   290728.00 us           29823    LOOKUP
      0.73     257.92 us       54.00 us   344812.00 us           63296    FXATTROP
      1.37      98.30 us       28.00 us    67621.00 us          313172    FSTAT
      1.58    2124.69 us       51.00 us   849200.00 us           16717    FSYNC
      5.73     162.46 us       52.00 us   748492.00 us          794079    WRITE
      7.19    2065.17 us       16.00 us 37098954.00 us           78381    FINODELK
     36.44     886.32 us       25.00 us  2216436.00 us          925421    READ
     46.30    1178.04 us       13.00 us  1700704.00 us          884635    INODELK

Duration: 7485 seconds
Data Read: 71250527215 bytes
Data Written: 5119903744 bytes
Brick: ovirt3.nwfiber.com:/gluster/brick2/data
----------------------------------------------
Cumulative Stats:
   Block Size:        1b+
 No. of Reads:          0
No. of Writes:    3264419

 %-latency   Avg-latency    Min-Latency    Max-Latency    No. of calls    Fop
 ---------   -----------    -----------    -----------    ------------    ----
      0.00       0.00 us        0.00 us        0.00 us              90    FORGET
      0.00       0.00 us        0.00 us        0.00 us            9462    RELEASE
      0.00       0.00 us        0.00 us        0.00 us            4254    RELEASEDIR
      0.00      50.52 us       13.00 us      190.00 us              71    FLUSH
      0.00     186.97 us       87.00 us      713.00 us              86    REMOVEXATTR
      0.00      79.32 us       33.00 us      189.00 us             228    LK
      0.00     220.98 us      129.00 us      513.00 us              86    SETATTR
      0.01     259.30 us       26.00 us     2632.00 us             137    READDIR
      0.02     322.76 us      145.00 us     2125.00 us             321    XATTROP
      0.03     109.55 us        2.00 us     1258.00 us            1193    OPENDIR
      0.05      70.21 us       21.00 us      431.00 us            3196    DISCARD
      0.05     169.26 us       21.00 us     2315.00 us            1545    GETXATTR
      0.12     176.85 us       63.00 us     2844.00 us            3206    OPEN
      0.61     303.49 us       90.00 us     3085.00 us            9633    FSTAT
      2.44     305.66 us       28.00 us     3716.00 us           38230    LOOKUP
      4.52     266.22 us       55.00 us    53424.00 us           81455    FXATTROP
      6.96    1397.99 us       51.00 us    64822.00 us           23889    FSYNC
     16.48      84.74 us       25.00 us     6917.00 us          932592    WRITE
     30.16     106.90 us       13.00 us  3920189.00 us         1353046    INODELK
     38.55    1794.52 us       14.00 us 16210553.00 us          103039    FINODELK

Duration: 66562 seconds
Data Read: 0 bytes
Data Written: 3264419 bytes

Interval 1 Stats:
   Block Size:        1b+
 No. of Reads:          0
No. of Writes:     794080

 %-latency   Avg-latency    Min-Latency    Max-Latency    No. of calls    Fop
 ---------   -----------    -----------    -----------    ------------    ----
      0.00       0.00 us        0.00 us        0.00 us            2093    RELEASE
      0.00       0.00 us        0.00 us        0.00 us            1093    RELEASEDIR
      0.00      70.31 us       26.00 us      125.00 us              16    FLUSH
      0.00     193.10 us      103.00 us      713.00 us              71    REMOVEXATTR
      0.01     227.32 us      133.00 us      513.00 us              71    SETATTR
      0.01      79.32 us       33.00 us      189.00 us             228    LK
      0.01     259.83 us       35.00 us     1138.00 us              89    READDIR
      0.03     318.26 us      145.00 us     2047.00 us             253    XATTROP
      0.04     112.67 us        3.00 us     1258.00 us            1093    OPENDIR
      0.06     167.98 us       23.00 us     1951.00 us            1014    GETXATTR
      0.08      70.97 us       22.00 us      431.00 us            3053    DISCARD
      0.13     183.78 us       66.00 us     2844.00 us            2091    OPEN
      1.01     303.82 us       90.00 us     3085.00 us            9610    FSTAT
      3.27     316.59 us       30.00 us     3716.00 us           29820    LOOKUP
      5.83     265.79 us       59.00 us    53424.00 us           63296    FXATTROP
      7.95    1373.89 us       51.00 us    64822.00 us           16717    FSYNC
     23.17     851.99 us       14.00 us 16210553.00 us           78555    FINODELK
     24.04      87.44 us       27.00 us     6917.00 us          794081    WRITE
     34.36     111.91 us       14.00 us   984871.00 us          886790    INODELK

Duration: 7485 seconds
Data Read: 0 bytes
Data Written: 794080 bytes
-----------------------

Here is the data from the volume that is backed by the SSHDs and has one failed disk:

[root@ovirt2 yum.repos.d]# gluster volume profile data-hdd info
Brick: 172.172.1.12:/gluster/brick3/data-hdd
--------------------------------------------
Cumulative Stats:
   Block Size:        256b+        512b+       1024b+
 No. of Reads:         1702           86           16
No. of Writes:            0          767           71

   Block Size:       2048b+       4096b+       8192b+
 No. of Reads:           19        51841         2049
No. of Writes:           76        60668        35727

   Block Size:      16384b+      32768b+      65536b+
 No. of Reads:         1744          639         1088
No. of Writes:         8524         2410         1285

   Block Size:     131072b+
 No. of Reads:       771999
No. of Writes:        29584

 %-latency  Avg-latency  Min-Latency    Max-Latency  No. of calls         Fop
 ---------  -----------  -----------    -----------  ------------        ----
      0.00      0.00 us      0.00 us        0.00 us          2902     RELEASE
      0.00      0.00 us      0.00 us        0.00 us          1517  RELEASEDIR
      0.00    197.00 us    197.00 us      197.00 us             1   FTRUNCATE
      0.00     70.24 us     16.00 us      758.00 us            51       FLUSH
      0.00    143.93 us     82.00 us      305.00 us            57 REMOVEXATTR
      0.00    178.63 us    105.00 us      712.00 us            60     SETATTR
      0.00     67.30 us     19.00 us      572.00 us           555          LK
      0.00    322.80 us     23.00 us     4673.00 us           138     READDIR
      0.00    336.56 us    106.00 us    11994.00 us           237     XATTROP
      0.00     84.70 us     28.00 us     1071.00 us          3469      STATFS
      0.01    387.75 us      2.00 us   146017.00 us          1467     OPENDIR
      0.01    148.59 us     21.00 us    64374.00 us          4454        STAT
      0.02    783.02 us     16.00 us    93502.00 us          1902    GETXATTR
      0.03   1516.10 us     17.00 us   210690.00 us          1364     ENTRYLK
      0.03   2555.47 us    300.00 us   674454.00 us          1064    READDIRP
      0.07     85.74 us     19.00 us    68340.00 us         62849       FSTAT
      0.07   1978.12 us     59.00 us   202596.00 us          2729        OPEN
      0.22    708.57 us     15.00 us   394799.00 us         25447      LOOKUP
      5.94   2331.74 us     15.00 us  1099530.00 us        207534    FINODELK
      7.31   8311.75 us     58.00 us  1800216.00 us         71668    FXATTROP
     12.49   7735.19 us     51.00 us  3595513.00 us        131642       WRITE
     17.70    957.08 us     16.00 us 13700466.00 us       1508160     INODELK
     24.55   2546.43 us     26.00 us  5077347.00 us        786060        READ
     31.56  49699.15 us     47.00 us  3746331.00 us         51777       FSYNC

    Duration: 10101 seconds
   Data Read: 101562897361 bytes
Data Written: 4834450432 bytes
Interval 0 Stats:
   Block Size:        256b+        512b+       1024b+
 No. of Reads:         1702           86           16
No. of Writes:            0          767           71

   Block Size:       2048b+       4096b+       8192b+
 No. of Reads:           19        51841         2049
No. of Writes:           76        60668        35727

   Block Size:      16384b+      32768b+      65536b+
 No. of Reads:         1744          639         1088
No. of Writes:         8524         2410         1285

   Block Size:     131072b+
 No. of Reads:       771999
No. of Writes:        29584

 %-latency  Avg-latency  Min-Latency    Max-Latency  No. of calls         Fop
 ---------  -----------  -----------    -----------  ------------        ----
      0.00      0.00 us      0.00 us        0.00 us          2902     RELEASE
      0.00      0.00 us      0.00 us        0.00 us          1517  RELEASEDIR
      0.00    197.00 us    197.00 us      197.00 us             1   FTRUNCATE
      0.00     70.24 us     16.00 us      758.00 us            51       FLUSH
      0.00    143.93 us     82.00 us      305.00 us            57 REMOVEXATTR
      0.00    178.63 us    105.00 us      712.00 us            60     SETATTR
      0.00     67.30 us     19.00 us      572.00 us           555          LK
      0.00    322.80 us     23.00 us     4673.00 us           138     READDIR
      0.00    336.56 us    106.00 us    11994.00 us           237     XATTROP
      0.00     84.70 us     28.00 us     1071.00 us          3469      STATFS
      0.01    387.75 us      2.00 us   146017.00 us          1467     OPENDIR
      0.01    148.59 us     21.00 us    64374.00 us          4454        STAT
      0.02    783.02 us     16.00 us    93502.00 us          1902    GETXATTR
      0.03   1516.10 us     17.00 us   210690.00 us          1364     ENTRYLK
      0.03   2555.47 us    300.00 us   674454.00 us          1064    READDIRP
      0.07     85.73 us     19.00 us    68340.00 us         62849       FSTAT
      0.07   1978.12 us     59.00 us   202596.00 us          2729        OPEN
      0.22    708.57 us     15.00 us   394799.00 us         25447      LOOKUP
      5.94   2334.57 us     15.00 us  1099530.00 us        207534    FINODELK
      7.31   8311.49 us     58.00 us  1800216.00 us         71668    FXATTROP
     12.49   7735.32 us     51.00 us  3595513.00 us        131642       WRITE
     17.71    957.08 us     16.00 us 13700466.00 us       1508160     INODELK
     24.56   2546.42 us     26.00 us  5077347.00 us        786060        READ
     31.54  49651.63 us     47.00 us  3746331.00 us         51777       FSYNC

    Duration: 10101 seconds
   Data Read: 101562897361 bytes
Data Written: 4834450432 bytes
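As an aside for anyone digging through these dumps: the FOP table can be reduced to the worst offenders with a small filter. This is just a throwaway helper of mine, not part of gluster; it assumes the `gluster volume profile <vol> info` output format shown above (percentage in the first column, FOP name in the last).

```shell
# Print the top-N FOPs by %-latency from a saved profile dump.
# Rows are recognized by a leading decimal percentage; the FOP name
# is the last field on the line.
profile_top() {
  awk '$1 ~ /^[0-9]+\.[0-9]+$/ && NF >= 6 { print $1, $NF }' "$1" \
    | sort -rn | head -n "${2:-5}"
}

# Usage: gluster volume profile data-hdd info > /tmp/p.txt && profile_top /tmp/p.txt
```

For the data-hdd dump above, that would surface FSYNC, READ, and INODELK immediately, which is consistent with a brick struggling on slow or failing media.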
On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir <jim@palousetech.com> wrote:
> Thank you for your response. > > I have 4 gluster volumes. 3 are replica 2 + arbitrator. replica > bricks are on ovirt1 and ovirt2, arbitrator on ovirt3. The 4th volume is > replica 3, with a brick on all three ovirt machines. > > The first 3 volumes are on an SSD disk; the 4th is on a Seagate SSHD > (same in all three machines). On ovirt3, the SSHD has reported hard IO > failures, and that brick is offline. However, the other two replicas are > fully operational (although they still show contents in the heal info > command that won't go away, but that may be the case until I replace the > failed disk). > > What is bothering me is that ALL 4 gluster volumes are showing > horrible performance issues. At this point, as the bad disk has been > completely offlined, I would expect gluster to perform at normal speed, but > that is definitely not the case. > > I've also noticed that the performance hits seem to come in waves: > things seem to work acceptably (but slow) for a while, then suddenly, its > as if all disk IO on all volumes (including non-gluster local OS disk > volumes for the hosts) pause for about 30 seconds, then IO resumes again. > During those times, I start getting VM not responding and host not > responding notices as well as the applications having major issues. > > I've shut down most of my VMs and am down to just my essential core > VMs (shedded about 75% of my VMs). I still am experiencing the same issues. > > Am I correct in believing that once the failed disk was brought > offline that performance should return to normal? > > On Tue, May 29, 2018 at 1:27 PM, Alex K <rightkicktech@gmail.com> > wrote: > >> I would check disks status and accessibility of mount points where >> your gluster volumes reside. 
>> >> On Tue, May 29, 2018, 22:28 Jim Kusznir <jim@palousetech.com> >> wrote: >> >>> On one ovirt server, I'm now seeing these messages: >>> [56474.239725] blk_update_request: 63 callbacks suppressed >>> [56474.239732] blk_update_request: I/O error, dev dm-2, sector 0 >>> [56474.240602] blk_update_request: I/O error, dev dm-2, sector >>> 3905945472 >>> [56474.241346] blk_update_request: I/O error, dev dm-2, sector >>> 3905945584 >>> [56474.242236] blk_update_request: I/O error, dev dm-2, sector 2048 >>> [56474.243072] blk_update_request: I/O error, dev dm-2, sector >>> 3905943424 >>> [56474.243997] blk_update_request: I/O error, dev dm-2, sector >>> 3905943536 >>> [56474.247347] blk_update_request: I/O error, dev dm-2, sector 0 >>> [56474.248315] blk_update_request: I/O error, dev dm-2, sector >>> 3905945472 >>> [56474.249231] blk_update_request: I/O error, dev dm-2, sector >>> 3905945584 >>> [56474.250221] blk_update_request: I/O error, dev dm-2, sector 2048 >>> >>> >>> >>> >>> On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir <jim@palousetech.com >>> > wrote: >>> >>>> I see in messages on ovirt3 (my 3rd machine, the one upgraded to >>>> 4.2): >>>> >>>> May 29 11:54:41 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>> database connection failed (No such file or directory) >>>> May 29 11:54:51 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>> database connection failed (No such file or directory) >>>> May 29 11:55:01 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>> database connection failed (No such file or directory) >>>> (appears a lot). >>>> >>>> I also found on the ssh session of that, some sysv warnings about >>>> the backing disk for one of the gluster volumes (straight replica 3). The >>>> glusterfs process for that disk on that machine went offline. 
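Floods of blk_update_request messages like those are easier to triage when grouped by device. A throwaway filter (my own sketch, assuming the standard kernel message format quoted above) can be run against `dmesg` or /var/log/messages:

```shell
# Count I/O errors per block device from kernel-log style input on stdin.
io_errors_by_dev() {
  awk '/blk_update_request: I\/O error/ {
         for (i = 1; i <= NF; i++)
           if ($i == "dev") { sub(/,$/, "", $(i+1)); n[$(i+1)]++ }
       }
       END { for (d in n) print d, n[d] }'
}

# Usage: dmesg | io_errors_by_dev
```

If one dm-N device dominates, `dmsetup info` on that device should tell you which LV (and therefore which brick or OS volume) is actually failing.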
Its my >>>> understanding that it should continue to work with the other two machines >>>> while I attempt to replace that disk, right? Attempted writes (touching an >>>> empty file) can take 15 seconds, repeating it later will be much faster. >>>> >>>> Gluster generates a bunch of different log files, I don't know >>>> what ones you want, or from which machine(s). >>>> >>>> How do I do "volume profiling"? >>>> >>>> Thanks! >>>> >>>> On Tue, May 29, 2018 at 11:53 AM, Sahina Bose <sabose@redhat.com> >>>> wrote: >>>> >>>>> Do you see errors reported in the mount logs for the volume? If >>>>> so, could you attach the logs? >>>>> Any issues with your underlying disks. Can you also attach >>>>> output of volume profiling? >>>>> >>>>> On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir < >>>>> jim@palousetech.com> wrote: >>>>> >>>>>> Ok, things have gotten MUCH worse this morning. I'm getting >>>>>> random errors from VMs, right now, about a third of my VMs have been paused >>>>>> due to storage issues, and most of the remaining VMs are not performing >>>>>> well. >>>>>> >>>>>> At this point, I am in full EMERGENCY mode, as my production >>>>>> services are now impacted, and I'm getting calls coming in with problems... >>>>>> >>>>>> I'd greatly appreciate help...VMs are running VERY slowly (when >>>>>> they run), and they are steadily getting worse. I don't know why. I was >>>>>> seeing CPU peaks (to 100%) on several VMs, in perfect sync, for a few >>>>>> minutes at a time (while the VM became unresponsive and any VMs I was >>>>>> logged into that were linux were giving me the CPU stuck messages in my >>>>>> origional post). Is all this storage related? >>>>>> >>>>>> I also have two different gluster volumes for VM storage, and >>>>>> only one had the issues, but now VMs in both are being affected at the same >>>>>> time and same way. 
>>>>>> >>>>>> --Jim >>>>>> >>>>>> On Mon, May 28, 2018 at 10:50 PM, Sahina Bose < >>>>>> sabose@redhat.com> wrote: >>>>>> >>>>>>> [Adding gluster-users to look at the heal issue] >>>>>>> >>>>>>> On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir < >>>>>>> jim@palousetech.com> wrote: >>>>>>> >>>>>>>> Hello: >>>>>>>> >>>>>>>> I've been having some cluster and gluster performance issues >>>>>>>> lately. I also found that my cluster was out of date, and was trying to >>>>>>>> apply updates (hoping to fix some of these), and discovered the ovirt 4.1 >>>>>>>> repos were taken completely offline. So, I was forced to begin an upgrade >>>>>>>> to 4.2. According to docs I found/read, I needed only add the new repo, do >>>>>>>> a yum update, reboot, and be good on my hosts (did the yum update, the >>>>>>>> engine-setup on my hosted engine). Things seemed to work relatively well, >>>>>>>> except for a gluster sync issue that showed up. >>>>>>>> >>>>>>>> My cluster is a 3 node hyperconverged cluster. I upgraded >>>>>>>> the hosted engine first, then engine 3. When engine 3 came back up, for >>>>>>>> some reason one of my gluster volumes would not sync. 
Here's sample output: >>>>>>>> >>>>>>>> [root@ovirt3 ~]# gluster volume heal data-hdd info >>>>>>>> Brick 172.172.1.11:/gluster/brick3/data-hdd >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>>>>>>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>>>>>>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>>>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>>>>>>> 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>>>>>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>>>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>>>>>>> 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>>>>>>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>>>>>> Status: Connected >>>>>>>> Number of entries: 8 >>>>>>>> >>>>>>>> Brick 172.172.1.12:/gluster/brick3/data-hdd >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>>>>>>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>>>>>>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>>>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>>>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>>>>>>> 
4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>>>>>>> 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>>>>>>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>>>>>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>>>>>> Status: Connected >>>>>>>> Number of entries: 8 >>>>>>>> >>>>>>>> Brick 172.172.1.13:/gluster/brick3/data-hdd >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4 >>>>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4 >>>>>>>> 810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4 >>>>>>>> aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-4 >>>>>>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4 >>>>>>>> cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4 >>>>>>>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-4 >>>>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-4 >>>>>>>> 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>>>>>> Status: Connected >>>>>>>> Number of entries: 8 >>>>>>>> >>>>>>>> --------- >>>>>>>> Its been in this state for a couple days now, and bandwidth >>>>>>>> monitoring shows no appreciable data moving. I've tried repeatedly >>>>>>>> commanding a full heal from all three clusters in the node. 
Its always the >>>>>>>> same files that need healing. >>>>>>>> >>>>>>>> When running gluster volume heal data-hdd statistics, I see >>>>>>>> sometimes different information, but always some number of "heal failed" >>>>>>>> entries. It shows 0 for split brain. >>>>>>>> >>>>>>>> I'm not quite sure what to do. I suspect it may be due to >>>>>>>> nodes 1 and 2 still being on the older ovirt/gluster release, but I'm >>>>>>>> afraid to upgrade and reboot them until I have a good gluster sync (don't >>>>>>>> need to create a split brain issue). How do I proceed with this? >>>>>>>> >>>>>>>> Second issue: I've been experiencing VERY POOR performance on >>>>>>>> most of my VMs. To the tune that logging into a windows 10 vm via remote >>>>>>>> desktop can take 5 minutes, launching quickbooks inside said vm can easily >>>>>>>> take 10 minutes. On some linux VMs, I get random messages like this: >>>>>>>> Message from syslogd@unifi at May 28 20:39:23 ... >>>>>>>> kernel:[6171996.308904] NMI watchdog: BUG: soft lockup - >>>>>>>> CPU#0 stuck for 22s! [mongod:14766] >>>>>>>> >>>>>>>> (the process and PID are often different) >>>>>>>> >>>>>>>> I'm not quite sure what to do about this either. My initial >>>>>>>> thought was upgrad everything to current and see if its still there, but I >>>>>>>> cannot move forward with that until my gluster is healed... >>>>>>>> >>>>>>>> Thanks! 
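For stuck heals like the one quoted, it can help to confirm whether every brick reports the same set of pending entries (pointing at a self-heal daemon or sync problem) rather than diverging sets. A rough sketch of that comparison, assuming `gluster volume heal <vol> info` output saved to a file:

```shell
# From saved 'gluster volume heal <vol> info' output, print each pending
# entry with the number of bricks reporting it, most-reported first.
heal_entry_counts() {
  awk '/^Brick /      { next }
       /^Status:/     { next }
       /^Number of /  { next }
       NF             { n[$1]++ }
       END            { for (e in n) print n[e], e }' "$1" | sort -rn
}

# Usage: gluster volume heal data-hdd info > /tmp/h.txt && heal_entry_counts /tmp/h.txt
```

In the output quoted above, every entry would show a count of 3, i.e. all bricks agree on what needs healing but the heal itself is not completing.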
>>>>>>>> --Jim >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> Users mailing list -- users@ovirt.org >>>>>>>> To unsubscribe send an email to users-leave@ovirt.org >>>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ >>>>>>>> oVirt Code of Conduct: https://www.ovirt.org/communit >>>>>>>> y/about/community-guidelines/ >>>>>>>> List Archives: https://lists.ovirt.org/archiv >>>>>>>> es/list/users@ovirt.org/message/3LEV6ZQ3JV2XLAL7NYBTXOYMYUOT >>>>>>>> IRQF/ >>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> _______________________________________________ >>> Users mailing list -- users@ovirt.org >>> To unsubscribe send an email to users-leave@ovirt.org >>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ >>> oVirt Code of Conduct: https://www.ovirt.org/communit >>> y/about/community-guidelines/ >>> List Archives: https://lists.ovirt.org/archiv >>> es/list/users@ovirt.org/message/ACO7RFSLBSRBAIONIC2HQ6Z24ZDES5MF/ >>> >> >

The agent.log was sent minutes ago in another message. Here's the gluster client log for the engine volume:

[2018-05-30 06:31:12.343709] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata selfheal on 3a430e50-48cb-463c-8657-aab5458ac155
[2018-05-30 06:31:12.348210] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-engine-replicate-0: expunging file 117c5234-e6d6-4f03-809b-a4ef6a6fac4d/hosted-engine.metadata (38a277cd-113d-4ca8-8f48-fadc944d3272) on engine-client-1
[2018-05-30 06:31:12.360602] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed metadata selfheal on 3a430e50-48cb-463c-8657-aab5458ac155. sources=[0] 2 sinks=1
[2018-05-30 06:31:12.379169] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed entry selfheal on 117c5234-e6d6-4f03-809b-a4ef6a6fac4d. sources=[0] 2 sinks=1
[2018-05-30 06:32:54.756729] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f
[2018-05-30 06:32:54.763175] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed metadata selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 06:37:06.581538] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 06c557a9-b550-4e25-b190-804e50cea65b. sources=[0] 2 sinks=1
[2018-05-30 06:39:55.739302] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 06:44:04.911825] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 06:58:05.123854] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata selfheal on 03ca7a59-e8ee-4a70-bc68-47e972aeb801
[2018-05-30 06:58:05.131929] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed metadata selfheal on 03ca7a59-e8ee-4a70-bc68-47e972aeb801. sources=[0] 2 sinks=1
[2018-05-30 07:01:42.583991] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 07:05:48.777982] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1

--------
broker log:
--------

Thread-1::INFO::2018-05-30 00:07:13,241::ping::60::ping.Ping::(action) Successfully pinged 192.168.8.1
Listener::INFO::2018-05-30 00:07:14,344::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:19,356::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-5::INFO::2018-05-30 00:07:22,081::engine_health::202::engine_health.EngineHealth::(_result_from_stats) VM not running on this host, status Down
Listener::INFO::2018-05-30 00:07:24,369::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:29,382::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:34,393::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:39,406::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:44,419::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:49,431::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:54,444::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:59,452::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-2::INFO::2018-05-30 00:08:03,736::mgmt_bridge::62::mgmt_bridge.MgmtBridge::(action) Found bridge ovirtmgmt with ports
Listener::INFO::2018-05-30 00:08:04,460::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:08:09,468::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-4::INFO::2018-05-30 00:08:11,461::cpu_load_no_engine::116::cpu_load_no_engine.CpuLoadNoEngine::(calculate_load) VM stats do not contain cpu usage. VM might be down.
Thread-4::INFO::2018-05-30 00:08:11,461::cpu_load_no_engine::126::cpu_load_no_engine.CpuLoadNoEngine::(calculate_load) System load total=0.0169, engine=0.0000, non-engine=0.0169
Listener::INFO::2018-05-30 00:08:14,481::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:08:19,490::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-5::INFO::2018-05-30 00:08:22,175::engine_health::202::engine_health.EngineHealth::(_result_from_stats) VM not running on this host, status Down
Thread-1::INFO::2018-05-30 00:08:22,436::ping::60::ping.Ping::(action) Successfully pinged 192.168.8.1
Listener::INFO::2018-05-30 00:08:24,503::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING

On Wed, May 30, 2018 at 12:03 AM, Sahina Bose <sabose@redhat.com> wrote:
[Adding gluster-users back]
Nothing amiss with volume info and status. Can you check the agent.log and broker.log - will be under /var/log/ovirt-hosted-engine-ha/
Also the gluster client logs - under /var/log/glusterfs/rhev-data-center-mnt-glusterSD<volume>.log
On Wed, May 30, 2018 at 12:08 PM, Jim Kusznir <jim@palousetech.com> wrote:
I believe the gluster data store for the engine is up and working correctly. The rest are not mounted, as the engine hasn't managed to start correctly yet. I did perform the copy-a-junk-file test: copy a file into the data store, check that it's there on another host, then delete it and see that it disappeared on the first host; it passed that test. Here's the info and status. (I have NOT performed the steps that Krutika and Ravishankar suggested yet, as I don't have my data volumes working again yet.)
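That copy-a-junk-file round trip can be wrapped in a tiny function so it's repeatable per volume. This is purely a sketch; in real use the two arguments would be the same gluster volume mounted on two different hosts (reached over ssh or run once per side), not two local directories:

```shell
# Write a marker file under the first mount and verify it is visible
# under the second, then clean up. Prints "ok" or "missing".
check_propagation() {
  local tag="propagation-test-$$"
  : > "$1/$tag" || { echo "write failed"; return 1; }
  # Real gluster client mounts may need a moment before the entry appears.
  if [ -e "$2/$tag" ]; then
    rm -f "$1/$tag"
    echo "ok"
  else
    rm -f "$1/$tag"
    echo "missing"
  fi
}
```

Note this only proves basic replication of new entries; it says nothing about the pending-heal backlog on existing files.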
[root@ovirt2 images]# gluster volume info
Volume Name: data
Type: Replicate
Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick2/data
Brick2: ovirt2.nwfiber.com:/gluster/brick2/data
Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter)
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
server.allow-insecure: on
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
Volume Name: data-hdd
Type: Replicate
Volume ID: d342a3ab-16f3-49f0-bbcf-f788be8ac5f1
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 172.172.1.11:/gluster/brick3/data-hdd
Brick2: 172.172.1.12:/gluster/brick3/data-hdd
Brick3: 172.172.1.13:/gluster/brick3/data-hdd
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
network.ping-timeout: 30
server.allow-insecure: on
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
performance.readdir-ahead: on
Volume Name: engine
Type: Replicate
Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
Volume Name: iso
Type: Replicate
Volume ID: b1ba15f5-0f0f-4411-89d0-595179f02b92
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick4/iso
Brick2: ovirt2.nwfiber.com:/gluster/brick4/iso
Brick3: ovirt3.nwfiber.com:/gluster/brick4/iso (arbiter)
Options Reconfigured:
performance.strict-o-direct: on
nfs.disable: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 6
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
[root@ovirt2 images]# gluster volume status
Status of volume: data
Gluster process                                TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick2/data  49152     0          Y       3226
Brick ovirt2.nwfiber.com:/gluster/brick2/data  49152     0          Y       2967
Brick ovirt3.nwfiber.com:/gluster/brick2/data  49152     0          Y       2554
Self-heal Daemon on localhost                  N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com         N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com         N/A       N/A        Y       4771

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: data-hdd
Gluster process                                TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 172.172.1.11:/gluster/brick3/data-hdd    49153     0          Y       3232
Brick 172.172.1.12:/gluster/brick3/data-hdd    49153     0          Y       2976
Brick 172.172.1.13:/gluster/brick3/data-hdd    N/A       N/A        N       N/A
NFS Server on localhost                        N/A       N/A        N       N/A
Self-heal Daemon on localhost                  N/A       N/A        Y       4818
NFS Server on ovirt3.nwfiber.com               N/A       N/A        N       N/A
Self-heal Daemon on ovirt3.nwfiber.com         N/A       N/A        Y       17853
NFS Server on ovirt1.nwfiber.com               N/A       N/A        N       N/A
Self-heal Daemon on ovirt1.nwfiber.com         N/A       N/A        Y       4771

Task Status of Volume data-hdd
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: engine
Gluster process                                 TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick1/engine 49154     0          Y       3239
Brick ovirt2.nwfiber.com:/gluster/brick1/engine 49154     0          Y       2982
Brick ovirt3.nwfiber.com:/gluster/brick1/engine 49154     0          Y       2578
Self-heal Daemon on localhost                   N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y       4771

Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: iso
Gluster process                                TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick4/iso   49155     0          Y       3247
Brick ovirt2.nwfiber.com:/gluster/brick4/iso   49155     0          Y       2990
Brick ovirt3.nwfiber.com:/gluster/brick4/iso   49155     0          Y       2580
Self-heal Daemon on localhost                  N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com         N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com         N/A       N/A        Y       4771

Task Status of Volume iso
------------------------------------------------------------------------------
There are no active volume tasks
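The status output above is where the data-hdd problem shows: the 172.172.1.13 brick has 'N' in the Online column. A quick filter to surface offline bricks (my own sketch, keyed to the `gluster volume status` row layout; note that in real terminal output long brick names wrap across two lines and would need re-joining first):

```shell
# List bricks whose Online column is 'N' from saved 'gluster volume status'
# output. The Online flag is the second-to-last field on each Brick row.
offline_bricks() {
  awk '$1 == "Brick" && $(NF-1) == "N" { print $2 }' "$1"
}

# Usage: gluster volume status > /tmp/s.txt && offline_bricks /tmp/s.txt
```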
On Tue, May 29, 2018 at 11:30 PM, Sahina Bose <sabose@redhat.com> wrote:
On Wed, May 30, 2018 at 10:42 AM, Jim Kusznir <jim@palousetech.com> wrote:
hosted-engine --deploy failed (would not come up on my existing gluster storage). However, I realized no changes were written to my existing storage. So, I went back to trying to get my old engine running.
hosted-engine --vm-status is now taking a very long time (5+ minutes) to return, and it returns stale information everywhere. I thought perhaps the lockspace was corrupt, so I tried to clean that and the metadata, but both are failing (--clean-metadata has hung and I can't even ctrl-c out of it).
How can I reinitialize all the lockspace/metadata safely? No engine or VMs are running currently....
I think the first thing to make sure of is that your storage is up and running. Can you mount the gluster volumes and access the contents there? Please provide the gluster volume info and gluster volume status output for the volumes you're using.
--Jim
On Tue, May 29, 2018 at 9:33 PM, Jim Kusznir <jim@palousetech.com> wrote:
Well, things went from bad to very, very bad....
It appears that during one of the 2-minute lockups, the fencing agents decided that another node in the cluster was down. As a result, 2 of the 3 nodes were simultaneously reset by the fencing agent. After the nodes came back up, the engine would not start, and all running VMs (including VMs on the 3rd node, which was not rebooted) crashed.
I've now been working for about 3 hours trying to get the engine to come up. I don't know why it won't start. hosted-engine --vm-start says it's starting, but it doesn't start (virsh doesn't show any VMs running). I'm currently running --deploy, as I've run out of other options. I hope this will allow me to re-import all my existing VMs and start them back up after everything comes back up.
I do have an unverified geo-rep backup; I don't know if it is a good backup (there were several prior messages to this list, but I didn't get replies to my questions; the replication was behaving in ways I believe to be strange, and the data directories are larger than their source).
I'll see if my --deploy works, and if not, I'll be back with another message/help request.
When the dust settles and I'm at least minimally functional again, I really want to understand why all these technologies designed to offer redundancy conspired to reduce uptime and create failures where there weren't any otherwise. I thought that with hosted engine, 3 oVirt servers, and glusterfs at minimum replica 2+arbiter or replica 3, I should have had strong resilience against server or disk failure, and protection from (or recovery from) data corruption. Instead, all of the above happened (once I get my cluster back up, I still have to try to recover my webserver VM, which won't boot due to XFS journal corruption created during the gluster crashes). I think a lot of these issues were rooted in the upgrade from 4.1 to 4.2.
--Jim
On Tue, May 29, 2018 at 6:25 PM, Jim Kusznir <jim@palousetech.com> wrote:
I also finally found the following in my system log on one server:
[10679.524491] INFO: task glusterclogro:14933 blocked for more than 120 seconds.
[10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[10679.527144] glusterclogro   D ffff97209832bf40     0 14933      1 0x00000080
[10679.527150] Call Trace:
[10679.527161]  [<ffffffffb9913f79>] schedule+0x29/0x70
[10679.527218]  [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[10679.527225]  [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[10679.527254]  [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[10679.527260]  [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[10679.527268]  [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[10679.527271]  [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20
[10679.527275]  [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[10679.527279]  [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds.
[10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[10679.529956] glusterposixfsy D ffff972495f84f10     0 14941      1 0x00000080
[10679.529961] Call Trace:
[10679.529966]  [<ffffffffb9913f79>] schedule+0x29/0x70
[10679.530003]  [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[10679.530008]  [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[10679.530038]  [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[10679.530042]  [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[10679.530046]  [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[10679.530050]  [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20
[10679.530054]  [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[10679.530058]  [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120 seconds.
[10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[10679.533732] glusteriotwr13  D ffff9720a83f0000     0 15486      1 0x00000080
[10679.533738] Call Trace:
[10679.533747]  [<ffffffffb9913f79>] schedule+0x29/0x70
[10679.533799]  [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[10679.533806]  [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[10679.533846]  [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[10679.533852]  [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[10679.533858]  [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[10679.533863]  [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20
[10679.533868]  [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[10679.533873]  [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[10919.512757] INFO: task glusterclogro:14933 blocked for more than 120 seconds.
[10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[10919.516663] glusterclogro   D ffff97209832bf40     0 14933      1 0x00000080
[10919.516677] Call Trace:
[10919.516690]  [<ffffffffb9913f79>] schedule+0x29/0x70
[10919.516696]  [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
[10919.516703]  [<ffffffffb951cc04>] ? blk_finish_plug+0x14/0x40
[10919.516768]  [<ffffffffc05e9224>] ? _xfs_buf_ioapply+0x334/0x460 [xfs]
[10919.516774]  [<ffffffffb991432d>] wait_for_completion+0xfd/0x140
[10919.516782]  [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[10919.516821]  [<ffffffffc05eb0a3>] ? _xfs_buf_read+0x23/0x40 [xfs]
[10919.516859]  [<ffffffffc05eafa9>] xfs_buf_submit_wait+0xf9/0x1d0 [xfs]
[10919.516902]  [<ffffffffc061b279>] ? xfs_trans_read_buf_map+0x199/0x400 [xfs]
[10919.516940]  [<ffffffffc05eb0a3>] _xfs_buf_read+0x23/0x40 [xfs]
[10919.516977]  [<ffffffffc05eb1b9>] xfs_buf_read_map+0xf9/0x160 [xfs]
[10919.517022]  [<ffffffffc061b279>] xfs_trans_read_buf_map+0x199/0x400 [xfs]
[10919.517057]  [<ffffffffc05c8d04>] xfs_da_read_buf+0xd4/0x100 [xfs]
[10919.517091]  [<ffffffffc05c8d53>] xfs_da3_node_read+0x23/0xd0 [xfs]
[10919.517126]  [<ffffffffc05c9fee>] xfs_da3_node_lookup_int+0x6e/0x2f0 [xfs]
[10919.517160]  [<ffffffffc05d5a1d>] xfs_dir2_node_lookup+0x4d/0x170 [xfs]
[10919.517194]  [<ffffffffc05ccf5d>] xfs_dir_lookup+0x1bd/0x1e0 [xfs]
[10919.517233]  [<ffffffffc05fd8d9>] xfs_lookup+0x69/0x140 [xfs]
[10919.517271]  [<ffffffffc05fa018>] xfs_vn_lookup+0x78/0xc0 [xfs]
[10919.517278]  [<ffffffffb9425cf3>] lookup_real+0x23/0x60
[10919.517283]  [<ffffffffb9426702>] __lookup_hash+0x42/0x60
[10919.517288]  [<ffffffffb942d519>] SYSC_renameat2+0x3a9/0x5a0
[10919.517296]  [<ffffffffb94d3753>] ? selinux_file_free_security+0x23/0x30
[10919.517304]  [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[10919.517309]  [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[10919.517313]  [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[10919.517318]  [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[10919.517323]  [<ffffffffb942e58e>] SyS_renameat2+0xe/0x10
[10919.517328]  [<ffffffffb942e5ce>] SyS_rename+0x1e/0x20
[10919.517333]  [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[10919.517339]  [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11159.496095] INFO: task glusteriotwr9:15482 blocked for more than 120 seconds.
[11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11159.498978] glusteriotwr9   D ffff971fa0fa1fa0     0 15482      1 0x00000080
[11159.498984] Call Trace:
[11159.498995]  [<ffffffffb9911f00>] ? bit_wait+0x50/0x50
[11159.498999]  [<ffffffffb9913f79>] schedule+0x29/0x70
[11159.499003]  [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
[11159.499056]  [<ffffffffc05dd9b7>] ? xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs]
[11159.499082]  [<ffffffffc05dd43e>] ? xfs_iext_bno_to_irec+0x8e/0xd0 [xfs]
[11159.499090]  [<ffffffffb92f7a12>] ? ktime_get_ts64+0x52/0xf0
[11159.499093]  [<ffffffffb9911f00>] ? bit_wait+0x50/0x50
[11159.499097]  [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130
[11159.499101]  [<ffffffffb9913528>] io_schedule+0x18/0x20
[11159.499104]  [<ffffffffb9911f11>] bit_wait_io+0x11/0x50
[11159.499107]  [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0
[11159.499113]  [<ffffffffb9393634>] __lock_page+0x74/0x90
[11159.499118]  [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40
[11159.499121]  [<ffffffffb9394154>] __find_lock_page+0x54/0x70
[11159.499125]  [<ffffffffb9394e85>] grab_cache_page_write_begin+0x55/0xc0
[11159.499130]  [<ffffffffb9484b76>] iomap_write_begin+0x66/0x100
[11159.499135]  [<ffffffffb9484edf>] iomap_write_actor+0xcf/0x1d0
[11159.499140]  [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80
[11159.499144]  [<ffffffffb94854e7>] iomap_apply+0xb7/0x150
[11159.499149]  [<ffffffffb9485621>] iomap_file_buffered_write+0xa1/0xe0
[11159.499153]  [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80
[11159.499182]  [<ffffffffc05f025d>] xfs_file_buffered_aio_write+0x12d/0x2c0 [xfs]
[11159.499213]  [<ffffffffc05f057d>] xfs_file_aio_write+0x18d/0x1b0 [xfs]
[11159.499217]  [<ffffffffb941a533>] do_sync_write+0x93/0xe0
[11159.499222]  [<ffffffffb941b010>] vfs_write+0xc0/0x1f0
[11159.499225]  [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0
[11159.499230]  [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11159.499234]  [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11159.499238]  [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than 120 seconds.
[11279.490197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.491665] xfsaild/dm-10   D ffff9720a8660fd0     0  1134      2 0x00000000
[11279.491671] Call Trace:
[11279.491682]  [<ffffffffb92a3a2e>] ? try_to_del_timer_sync+0x5e/0x90
[11279.491688]  [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.491744]  [<ffffffffc060de36>] _xfs_log_force+0x1c6/0x2c0 [xfs]
[11279.491750]  [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.491783]  [<ffffffffc0619fec>] ? xfsaild+0x16c/0x6f0 [xfs]
[11279.491817]  [<ffffffffc060df5c>] xfs_log_force+0x2c/0x70 [xfs]
[11279.491849]  [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[11279.491880]  [<ffffffffc0619fec>] xfsaild+0x16c/0x6f0 [xfs]
[11279.491913]  [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[11279.491919]  [<ffffffffb92bb161>] kthread+0xd1/0xe0
[11279.491926]  [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40
[11279.491932]  [<ffffffffb9920677>] ret_from_fork_nospec_begin+0x21/0x21
[11279.491936]  [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40
[11279.491976] INFO: task glusterclogfsyn:14934 blocked for more than 120 seconds.
[11279.493466] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.494952] glusterclogfsyn D ffff97209832af70     0 14934      1 0x00000080
[11279.494957] Call Trace:
[11279.494979]  [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod]
[11279.494983]  [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.494987]  [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
[11279.494997]  [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod]
[11279.495001]  [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130
[11279.495005]  [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140
[11279.495010]  [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.495016]  [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110
[11279.495049]  [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs]
[11279.495079]  [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs]
[11279.495086]  [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[11279.495090]  [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11279.495094]  [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20
[11279.495098]  [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11279.495102]  [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11279.495105] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds.
[11279.496606] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.498114] glusterposixfsy D ffff972495f84f10     0 14941      1 0x00000080
[11279.498118] Call Trace:
[11279.498134]  [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod]
[11279.498138]  [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.498142]  [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
[11279.498152]  [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod]
[11279.498156]  [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130
[11279.498160]  [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140
[11279.498165]  [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.498169]  [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110
[11279.498202]  [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs]
[11279.498231]  [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs]
[11279.498238]  [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[11279.498242]  [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11279.498246]  [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20
[11279.498250]  [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11279.498254]  [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11279.498257] INFO: task glusteriotwr1:14950 blocked for more than 120 seconds.
[11279.499789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.501343] glusteriotwr1   D ffff97208b6daf70     0 14950      1 0x00000080
[11279.501348] Call Trace:
[11279.501353]  [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.501390]  [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[11279.501396]  [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.501428]  [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[11279.501432]  [<ffffffffb944ef3f>] generic_write_sync+0x4f/0x70
[11279.501461]  [<ffffffffc05f0545>] xfs_file_aio_write+0x155/0x1b0 [xfs]
[11279.501466]  [<ffffffffb941a533>] do_sync_write+0x93/0xe0
[11279.501471]  [<ffffffffb941b010>] vfs_write+0xc0/0x1f0
[11279.501475]  [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0
[11279.501479]  [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11279.501483]  [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11279.501489]  [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[11279.501493] INFO: task glusteriotwr4:14953 blocked for more than 120 seconds.
[11279.503047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11279.504630] glusteriotwr4   D ffff972499f2bf40     0 14953      1 0x00000080
[11279.504635] Call Trace:
[11279.504640]  [<ffffffffb9913f79>] schedule+0x29/0x70
[11279.504676]  [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
[11279.504681]  [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
[11279.504710]  [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
[11279.504714]  [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
[11279.504718]  [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
[11279.504722]  [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20
[11279.504725]  [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
[11279.504730]  [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
[12127.466494] perf: interrupt took too long (8263 > 8150), lowering kernel.perf_event_max_sample_rate to 24000
--------------------
I think this is the cause of the massive ovirt performance issues, irrespective of gluster volume. At the time this happened, I was also ssh'ed into the host and running some rpm query commands. I had just run rpm -qa | grep glusterfs (to verify what version was actually installed), and that command took almost 2 minutes to return! Normally it takes less than 2 seconds. That is all pure local SSD IO, too....
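To see which processes the kernel keeps flagging in logs like the above (and how often), the hung-task lines can be tallied. A small sketch, assuming the stock `INFO: task NAME:PID blocked for more than N seconds.` format from dmesg/journal output:

```python
import re
from collections import Counter

# Sketch: count hung-task reports per process name from kernel log text.
# Assumes the stock format: "INFO: task NAME:PID blocked for more than N seconds."
HUNG_RE = re.compile(r"INFO: task (\S+):\d+ blocked for more than \d+ seconds")

SAMPLE = """\
[10679.524491] INFO: task glusterclogro:14933 blocked for more than 120 seconds.
[10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds.
[10919.512757] INFO: task glusterclogro:14933 blocked for more than 120 seconds.
"""

def blocked_task_counts(log_text: str) -> Counter:
    """Tally how many hung-task reports each process name accumulated."""
    return Counter(m.group(1) for m in HUNG_RE.finditer(log_text))

print(blocked_task_counts(SAMPLE).most_common())
```

Against the full log above this would show the gluster I/O threads and xfsaild dominating, consistent with the fsync stalls in the traces.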
I'm no expert, but it's my understanding that any time software causes these kinds of issues, it's a serious bug in that software, even if it's just mishandled exceptions. Is this correct?
--Jim
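The volume profile output quoted below is hard to eyeball across hundreds of rows. A small sketch for ranking file operations by their %-latency share; it assumes rows shaped like the `%-latency / Avg-latency / Min-Latency / Max-Latency / calls / Fop` table that `gluster volume profile <vol> info` prints (with the "us" unit tokens stripped):

```python
# Sketch: rank FOPs by %-latency from `gluster volume profile` table rows.
# Assumes rows shaped like: "%-latency avg min max calls FOP" (units stripped).

SAMPLE_ROWS = """\
5.92 44315.32 51.00 24731536.00 23957 FSYNC
13.27 2399.78 25.00 22089540.00 991005 READ
37.00 5980.43 52.00 22099889.00 1108976 WRITE
41.04 5452.75 13.00 22102452.00 1349053 INODELK
"""

def top_fops(rows: str, n: int = 3) -> list[tuple[str, float]]:
    """Return the n FOPs with the highest %-latency share."""
    parsed = []
    for line in rows.splitlines():
        fields = line.split()
        if len(fields) == 6:
            parsed.append((fields[5], float(fields[0])))
    parsed.sort(key=lambda t: t[1], reverse=True)
    return parsed[:n]

print(top_fops(SAMPLE_ROWS))
```

On the data-hdd numbers below, this kind of ranking makes the INODELK/FINODELK and FSYNC dominance stand out, which is where the contention appears to be.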
On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir <jim@palousetech.com> wrote:
> I think this is the profile information for one of the volumes that > lives on the SSDs and is fully operational with no down/problem disks: > > [root@ovirt2 yum.repos.d]# gluster volume profile data info > Brick: ovirt2.nwfiber.com:/gluster/brick2/data > ---------------------------------------------- > Cumulative Stats: > Block Size: 256b+ 512b+ > 1024b+ > No. of Reads: 983 2696 > 1059 > No. of Writes: 0 1113 > 302 > > Block Size: 2048b+ 4096b+ > 8192b+ > No. of Reads: 852 88608 > 53526 > No. of Writes: 522 812340 > 76257 > > Block Size: 16384b+ 32768b+ > 65536b+ > No. of Reads: 54351 241901 > 15024 > No. of Writes: 21636 8656 > 8976 > > Block Size: 131072b+ > No. of Reads: 524156 > No. of Writes: 296071 > %-latency Avg-latency Min-Latency Max-Latency No. of calls > Fop > --------- ----------- ----------- ----------- ------------ > ---- > 0.00 0.00 us 0.00 us 0.00 us 4189 > RELEASE > 0.00 0.00 us 0.00 us 0.00 us 1257 > RELEASEDIR > 0.00 46.19 us 12.00 us 187.00 us 69 > FLUSH > 0.00 147.00 us 78.00 us 367.00 us 86 > REMOVEXATTR > 0.00 223.46 us 24.00 us 1166.00 us 149 > READDIR > 0.00 565.34 us 76.00 us 3639.00 us 88 > FTRUNCATE > 0.00 263.28 us 20.00 us 28385.00 us 228 > LK > 0.00 98.84 us 2.00 us 880.00 us 1198 > OPENDIR > 0.00 91.59 us 26.00 us 10371.00 us 3853 > STATFS > 0.00 494.14 us 17.00 us 193439.00 us 1171 > GETXATTR > 0.00 299.42 us 35.00 us 9799.00 us 2044 > READDIRP > 0.00 1965.31 us 110.00 us 382258.00 us 321 > XATTROP > 0.01 113.40 us 24.00 us 61061.00 us 8134 > STAT > 0.01 755.38 us 57.00 us 607603.00 us 3196 > DISCARD > 0.05 2690.09 us 58.00 us 2704761.00 us 3206 > OPEN > 0.10 119978.25 us 97.00 us 9406684.00 us 154 > SETATTR > 0.18 101.73 us 28.00 us 700477.00 us 313379 > FSTAT > 0.23 1059.84 us 25.00 us 2716124.00 us 38255 > LOOKUP > 0.47 1024.11 us 54.00 us 6197164.00 us 81455 > FXATTROP > 1.72 2984.00 us 15.00 us 37098954.00 us > 103020 FINODELK > 5.92 44315.32 us 51.00 us 24731536.00 us > 23957 FSYNC > 13.27 2399.78 us 25.00 us 22089540.00 
us > 991005 READ > 37.00 5980.43 us 52.00 us 22099889.00 us > 1108976 WRITE > 41.04 5452.75 us 13.00 us 22102452.00 us > 1349053 INODELK > > Duration: 10026 seconds > Data Read: 80046027759 bytes > Data Written: 44496632320 bytes > > Interval 1 Stats: > Block Size: 256b+ 512b+ > 1024b+ > No. of Reads: 983 2696 > 1059 > No. of Writes: 0 838 > 185 > > Block Size: 2048b+ 4096b+ > 8192b+ > No. of Reads: 852 85856 > 51575 > No. of Writes: 382 705802 > 57812 > > Block Size: 16384b+ 32768b+ > 65536b+ > No. of Reads: 52673 232093 > 14984 > No. of Writes: 13499 4908 > 4242 > > Block Size: 131072b+ > No. of Reads: 460040 > No. of Writes: 6411 > %-latency Avg-latency Min-Latency Max-Latency No. of calls > Fop > --------- ----------- ----------- ----------- ------------ > ---- > 0.00 0.00 us 0.00 us 0.00 us 2093 > RELEASE > 0.00 0.00 us 0.00 us 0.00 us 1093 > RELEASEDIR > 0.00 53.38 us 26.00 us 111.00 us 16 > FLUSH > 0.00 145.14 us 78.00 us 367.00 us 71 > REMOVEXATTR > 0.00 190.96 us 114.00 us 298.00 us 71 > SETATTR > 0.00 213.38 us 24.00 us 1145.00 us 90 > READDIR > 0.00 263.28 us 20.00 us 28385.00 us 228 > LK > 0.00 101.76 us 2.00 us 880.00 us 1093 > OPENDIR > 0.01 93.60 us 27.00 us 10371.00 us 3090 > STATFS > 0.02 537.47 us 17.00 us 193439.00 us 1038 > GETXATTR > 0.03 297.44 us 35.00 us 9799.00 us 1990 > READDIRP > 0.03 2357.28 us 110.00 us 382258.00 us 253 > XATTROP > 0.04 385.93 us 58.00 us 47593.00 us 2091 > OPEN > 0.04 114.86 us 24.00 us 61061.00 us 7715 > STAT > 0.06 444.59 us 57.00 us 333240.00 us 3053 > DISCARD > 0.42 316.24 us 25.00 us 290728.00 us 29823 > LOOKUP > 0.73 257.92 us 54.00 us 344812.00 us 63296 > FXATTROP > 1.37 98.30 us 28.00 us 67621.00 us 313172 > FSTAT > 1.58 2124.69 us 51.00 us 849200.00 us 16717 > FSYNC > 5.73 162.46 us 52.00 us 748492.00 us 794079 > WRITE > 7.19 2065.17 us 16.00 us 37098954.00 us > 78381 FINODELK > 36.44 886.32 us 25.00 us 2216436.00 us 925421 > READ > 46.30 1178.04 us 13.00 us 1700704.00 us 884635 > INODELK > > Duration: 7485 
seconds > Data Read: 71250527215 bytes > Data Written: 5119903744 bytes > > Brick: ovirt3.nwfiber.com:/gluster/brick2/data > ---------------------------------------------- > Cumulative Stats: > Block Size: 1b+ > No. of Reads: 0 > No. of Writes: 3264419 > %-latency Avg-latency Min-Latency Max-Latency No. of calls > Fop > --------- ----------- ----------- ----------- ------------ > ---- > 0.00 0.00 us 0.00 us 0.00 us 90 > FORGET > 0.00 0.00 us 0.00 us 0.00 us 9462 > RELEASE > 0.00 0.00 us 0.00 us 0.00 us 4254 > RELEASEDIR > 0.00 50.52 us 13.00 us 190.00 us 71 > FLUSH > 0.00 186.97 us 87.00 us 713.00 us 86 > REMOVEXATTR > 0.00 79.32 us 33.00 us 189.00 us 228 > LK > 0.00 220.98 us 129.00 us 513.00 us 86 > SETATTR > 0.01 259.30 us 26.00 us 2632.00 us 137 > READDIR > 0.02 322.76 us 145.00 us 2125.00 us 321 > XATTROP > 0.03 109.55 us 2.00 us 1258.00 us 1193 > OPENDIR > 0.05 70.21 us 21.00 us 431.00 us 3196 > DISCARD > 0.05 169.26 us 21.00 us 2315.00 us 1545 > GETXATTR > 0.12 176.85 us 63.00 us 2844.00 us 3206 > OPEN > 0.61 303.49 us 90.00 us 3085.00 us 9633 > FSTAT > 2.44 305.66 us 28.00 us 3716.00 us 38230 > LOOKUP > 4.52 266.22 us 55.00 us 53424.00 us 81455 > FXATTROP > 6.96 1397.99 us 51.00 us 64822.00 us 23889 > FSYNC > 16.48 84.74 us 25.00 us 6917.00 us 932592 > WRITE > 30.16 106.90 us 13.00 us 3920189.00 us 1353046 > INODELK > 38.55 1794.52 us 14.00 us 16210553.00 us > 103039 FINODELK > > Duration: 66562 seconds > Data Read: 0 bytes > Data Written: 3264419 bytes > > Interval 1 Stats: > Block Size: 1b+ > No. of Reads: 0 > No. of Writes: 794080 > %-latency Avg-latency Min-Latency Max-Latency No. 
of calls > Fop > --------- ----------- ----------- ----------- ------------ > ---- > 0.00 0.00 us 0.00 us 0.00 us 2093 > RELEASE > 0.00 0.00 us 0.00 us 0.00 us 1093 > RELEASEDIR > 0.00 70.31 us 26.00 us 125.00 us 16 > FLUSH > 0.00 193.10 us 103.00 us 713.00 us 71 > REMOVEXATTR > 0.01 227.32 us 133.00 us 513.00 us 71 > SETATTR > 0.01 79.32 us 33.00 us 189.00 us 228 > LK > 0.01 259.83 us 35.00 us 1138.00 us 89 > READDIR > 0.03 318.26 us 145.00 us 2047.00 us 253 > XATTROP > 0.04 112.67 us 3.00 us 1258.00 us 1093 > OPENDIR > 0.06 167.98 us 23.00 us 1951.00 us 1014 > GETXATTR > 0.08 70.97 us 22.00 us 431.00 us 3053 > DISCARD > 0.13 183.78 us 66.00 us 2844.00 us 2091 > OPEN > 1.01 303.82 us 90.00 us 3085.00 us 9610 > FSTAT > 3.27 316.59 us 30.00 us 3716.00 us 29820 > LOOKUP > 5.83 265.79 us 59.00 us 53424.00 us 63296 > FXATTROP > 7.95 1373.89 us 51.00 us 64822.00 us 16717 > FSYNC > 23.17 851.99 us 14.00 us 16210553.00 us > 78555 FINODELK > 24.04 87.44 us 27.00 us 6917.00 us 794081 > WRITE > 34.36 111.91 us 14.00 us 984871.00 us 886790 > INODELK > > Duration: 7485 seconds > Data Read: 0 bytes > Data Written: 794080 bytes > > > ----------------------- > Here is the data from the volume that is backed by the SHDDs and has > one failed disk: > [root@ovirt2 yum.repos.d]# gluster volume profile data-hdd info > Brick: 172.172.1.12:/gluster/brick3/data-hdd > -------------------------------------------- > Cumulative Stats: > Block Size: 256b+ 512b+ > 1024b+ > No. of Reads: 1702 86 > 16 > No. of Writes: 0 767 > 71 > > Block Size: 2048b+ 4096b+ > 8192b+ > No. of Reads: 19 51841 > 2049 > No. of Writes: 76 60668 > 35727 > > Block Size: 16384b+ 32768b+ > 65536b+ > No. of Reads: 1744 639 > 1088 > No. of Writes: 8524 2410 > 1285 > > Block Size: 131072b+ > No. of Reads: 771999 > No. of Writes: 29584 > %-latency Avg-latency Min-Latency Max-Latency No. 
of calls > Fop > --------- ----------- ----------- ----------- ------------ > ---- > 0.00 0.00 us 0.00 us 0.00 us 2902 > RELEASE > 0.00 0.00 us 0.00 us 0.00 us 1517 > RELEASEDIR > 0.00 197.00 us 197.00 us 197.00 us 1 > FTRUNCATE > 0.00 70.24 us 16.00 us 758.00 us 51 > FLUSH > 0.00 143.93 us 82.00 us 305.00 us 57 > REMOVEXATTR > 0.00 178.63 us 105.00 us 712.00 us 60 > SETATTR > 0.00 67.30 us 19.00 us 572.00 us 555 > LK > 0.00 322.80 us 23.00 us 4673.00 us 138 > READDIR > 0.00 336.56 us 106.00 us 11994.00 us 237 > XATTROP > 0.00 84.70 us 28.00 us 1071.00 us 3469 > STATFS > 0.01 387.75 us 2.00 us 146017.00 us 1467 > OPENDIR > 0.01 148.59 us 21.00 us 64374.00 us 4454 > STAT > 0.02 783.02 us 16.00 us 93502.00 us 1902 > GETXATTR > 0.03 1516.10 us 17.00 us 210690.00 us 1364 > ENTRYLK > 0.03 2555.47 us 300.00 us 674454.00 us 1064 > READDIRP > 0.07 85.74 us 19.00 us 68340.00 us 62849 > FSTAT > 0.07 1978.12 us 59.00 us 202596.00 us 2729 > OPEN > 0.22 708.57 us 15.00 us 394799.00 us 25447 > LOOKUP > 5.94 2331.74 us 15.00 us 1099530.00 us 207534 > FINODELK > 7.31 8311.75 us 58.00 us 1800216.00 us 71668 > FXATTROP > 12.49 7735.19 us 51.00 us 3595513.00 us 131642 > WRITE > 17.70 957.08 us 16.00 us 13700466.00 us > 1508160 INODELK > 24.55 2546.43 us 26.00 us 5077347.00 us 786060 > READ > 31.56 49699.15 us 47.00 us 3746331.00 us 51777 > FSYNC > > Duration: 10101 seconds > Data Read: 101562897361 bytes > Data Written: 4834450432 bytes > > Interval 0 Stats: > Block Size: 256b+ 512b+ > 1024b+ > No. of Reads: 1702 86 > 16 > No. of Writes: 0 767 > 71 > > Block Size: 2048b+ 4096b+ > 8192b+ > No. of Reads: 19 51841 > 2049 > No. of Writes: 76 60668 > 35727 > > Block Size: 16384b+ 32768b+ > 65536b+ > No. of Reads: 1744 639 > 1088 > No. of Writes: 8524 2410 > 1285 > > Block Size: 131072b+ > No. of Reads: 771999 > No. of Writes: 29584 > %-latency Avg-latency Min-Latency Max-Latency No. 
of calls > Fop > --------- ----------- ----------- ----------- ------------ > ---- > 0.00 0.00 us 0.00 us 0.00 us 2902 > RELEASE > 0.00 0.00 us 0.00 us 0.00 us 1517 > RELEASEDIR > 0.00 197.00 us 197.00 us 197.00 us 1 > FTRUNCATE > 0.00 70.24 us 16.00 us 758.00 us 51 > FLUSH > 0.00 143.93 us 82.00 us 305.00 us 57 > REMOVEXATTR > 0.00 178.63 us 105.00 us 712.00 us 60 > SETATTR > 0.00 67.30 us 19.00 us 572.00 us 555 > LK > 0.00 322.80 us 23.00 us 4673.00 us 138 > READDIR > 0.00 336.56 us 106.00 us 11994.00 us 237 > XATTROP > 0.00 84.70 us 28.00 us 1071.00 us 3469 > STATFS > 0.01 387.75 us 2.00 us 146017.00 us 1467 > OPENDIR > 0.01 148.59 us 21.00 us 64374.00 us 4454 > STAT > 0.02 783.02 us 16.00 us 93502.00 us 1902 > GETXATTR > 0.03 1516.10 us 17.00 us 210690.00 us 1364 > ENTRYLK > 0.03 2555.47 us 300.00 us 674454.00 us 1064 > READDIRP > 0.07 85.73 us 19.00 us 68340.00 us 62849 > FSTAT > 0.07 1978.12 us 59.00 us 202596.00 us 2729 > OPEN > 0.22 708.57 us 15.00 us 394799.00 us 25447 > LOOKUP > 5.94 2334.57 us 15.00 us 1099530.00 us 207534 > FINODELK > 7.31 8311.49 us 58.00 us 1800216.00 us 71668 > FXATTROP > 12.49 7735.32 us 51.00 us 3595513.00 us 131642 > WRITE > 17.71 957.08 us 16.00 us 13700466.00 us > 1508160 INODELK > 24.56 2546.42 us 26.00 us 5077347.00 us 786060 > READ > 31.54 49651.63 us 47.00 us 3746331.00 us 51777 > FSYNC > > Duration: 10101 seconds > Data Read: 101562897361 bytes > Data Written: 4834450432 bytes > > > On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir <jim@palousetech.com> > wrote: > >> Thank you for your response. >> >> I have 4 gluster volumes. 3 are replica 2 + arbitrator. replica >> bricks are on ovirt1 and ovirt2, arbitrator on ovirt3. The 4th volume is >> replica 3, with a brick on all three ovirt machines. >> >> The first 3 volumes are on an SSD disk; the 4th is on a Seagate >> SSHD (same in all three machines). On ovirt3, the SSHD has reported hard >> IO failures, and that brick is offline. 
However, the other two replicas >> are fully operational (although they still show contents in the heal info >> command that won't go away, but that may be the case until I replace the >> failed disk). >> >> What is bothering me is that ALL 4 gluster volumes are showing >> horrible performance issues. At this point, as the bad disk has been >> completely offlined, I would expect gluster to perform at normal speed, but >> that is definitely not the case. >> >> I've also noticed that the performance hits seem to come in waves: >> things seem to work acceptably (but slow) for a while, then suddenly, its >> as if all disk IO on all volumes (including non-gluster local OS disk >> volumes for the hosts) pause for about 30 seconds, then IO resumes again. >> During those times, I start getting VM not responding and host not >> responding notices as well as the applications having major issues. >> >> I've shut down most of my VMs and am down to just my essential core >> VMs (shedded about 75% of my VMs). I still am experiencing the same issues. >> >> Am I correct in believing that once the failed disk was brought >> offline that performance should return to normal? >> >> On Tue, May 29, 2018 at 1:27 PM, Alex K <rightkicktech@gmail.com> >> wrote: >> >>> I would check disks status and accessibility of mount points where >>> your gluster volumes reside. 
>>> >>> On Tue, May 29, 2018, 22:28 Jim Kusznir <jim@palousetech.com> >>> wrote: >>> >>>> On one ovirt server, I'm now seeing these messages: >>>> [56474.239725] blk_update_request: 63 callbacks suppressed >>>> [56474.239732] blk_update_request: I/O error, dev dm-2, sector 0 >>>> [56474.240602] blk_update_request: I/O error, dev dm-2, sector >>>> 3905945472 >>>> [56474.241346] blk_update_request: I/O error, dev dm-2, sector >>>> 3905945584 >>>> [56474.242236] blk_update_request: I/O error, dev dm-2, sector >>>> 2048 >>>> [56474.243072] blk_update_request: I/O error, dev dm-2, sector >>>> 3905943424 >>>> [56474.243997] blk_update_request: I/O error, dev dm-2, sector >>>> 3905943536 >>>> [56474.247347] blk_update_request: I/O error, dev dm-2, sector 0 >>>> [56474.248315] blk_update_request: I/O error, dev dm-2, sector >>>> 3905945472 >>>> [56474.249231] blk_update_request: I/O error, dev dm-2, sector >>>> 3905945584 >>>> [56474.250221] blk_update_request: I/O error, dev dm-2, sector >>>> 2048 >>>> >>>> >>>> >>>> >>>> On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir < >>>> jim@palousetech.com> wrote: >>>> >>>>> I see in messages on ovirt3 (my 3rd machine, the one upgraded to >>>>> 4.2): >>>>> >>>>> May 29 11:54:41 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>>> database connection failed (No such file or directory) >>>>> May 29 11:54:51 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>>> database connection failed (No such file or directory) >>>>> May 29 11:55:01 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>>> database connection failed (No such file or directory) >>>>> (appears a lot). >>>>> >>>>> I also found on the ssh session of that, some sysv warnings >>>>> about the backing disk for one of the gluster volumes (straight replica >>>>> 3). The glusterfs process for that disk on that machine went offline. 
Its >>>>> my understanding that it should continue to work with the other two >>>>> machines while I attempt to replace that disk, right? Attempted writes >>>>> (touching an empty file) can take 15 seconds, repeating it later will be >>>>> much faster. >>>>> >>>>> Gluster generates a bunch of different log files, I don't know >>>>> what ones you want, or from which machine(s). >>>>> >>>>> How do I do "volume profiling"? >>>>> >>>>> Thanks! >>>>> >>>>> On Tue, May 29, 2018 at 11:53 AM, Sahina Bose <sabose@redhat.com >>>>> > wrote: >>>>> >>>>>> Do you see errors reported in the mount logs for the volume? If >>>>>> so, could you attach the logs? >>>>>> Any issues with your underlying disks. Can you also attach >>>>>> output of volume profiling? >>>>>> >>>>>> On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir < >>>>>> jim@palousetech.com> wrote: >>>>>> >>>>>>> Ok, things have gotten MUCH worse this morning. I'm getting >>>>>>> random errors from VMs, right now, about a third of my VMs have been paused >>>>>>> due to storage issues, and most of the remaining VMs are not performing >>>>>>> well. >>>>>>> >>>>>>> At this point, I am in full EMERGENCY mode, as my production >>>>>>> services are now impacted, and I'm getting calls coming in with problems... >>>>>>> >>>>>>> I'd greatly appreciate help...VMs are running VERY slowly >>>>>>> (when they run), and they are steadily getting worse. I don't know why. I >>>>>>> was seeing CPU peaks (to 100%) on several VMs, in perfect sync, for a few >>>>>>> minutes at a time (while the VM became unresponsive and any VMs I was >>>>>>> logged into that were linux were giving me the CPU stuck messages in my >>>>>>> origional post). Is all this storage related? >>>>>>> >>>>>>> I also have two different gluster volumes for VM storage, and >>>>>>> only one had the issues, but now VMs in both are being affected at the same >>>>>>> time and same way. 
>>>>>>>
>>>>>>> --Jim
>>>>>>>
>>>>>>> On Mon, May 28, 2018 at 10:50 PM, Sahina Bose <sabose@redhat.com> wrote:
>>>>>>>
>>>>>>>> [Adding gluster-users to look at the heal issue]
>>>>>>>>
>>>>>>>> On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir <jim@palousetech.com> wrote:
>>>>>>>>
>>>>>>>>> Hello:
>>>>>>>>>
>>>>>>>>> I've been having some cluster and gluster performance issues lately. I also found that my cluster was out of date and was trying to apply updates (hoping to fix some of these), and discovered the ovirt 4.1 repos were taken completely offline. So, I was forced to begin an upgrade to 4.2. According to the docs I found/read, I needed only to add the new repo, do a yum update, and reboot my hosts (did the yum update, then engine-setup on my hosted engine). Things seemed to work relatively well, except for a gluster sync issue that showed up.
>>>>>>>>>
>>>>>>>>> My cluster is a 3-node hyperconverged cluster. I upgraded the hosted engine first, then engine 3. When engine 3 came back up, for some reason one of my gluster volumes would not sync. Here's sample output:
>>>>>>>>>
>>>>>>>>> [root@ovirt3 ~]# gluster volume heal data-hdd info
>>>>>>>>> Brick 172.172.1.11:/gluster/brick3/data-hdd
>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
>>>>>>>>> Status: Connected
>>>>>>>>> Number of entries: 8
>>>>>>>>>
>>>>>>>>> Brick 172.172.1.12:/gluster/brick3/data-hdd
>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
>>>>>>>>> Status: Connected
>>>>>>>>> Number of entries: 8
>>>>>>>>>
>>>>>>>>> Brick 172.172.1.13:/gluster/brick3/data-hdd
>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
>>>>>>>>> Status: Connected
>>>>>>>>> Number of entries: 8
>>>>>>>>>
>>>>>>>>> ---------
>>>>>>>>> It's been in this state for a couple of days now, and bandwidth monitoring shows no appreciable data moving. I've tried repeatedly commanding a full heal from all three nodes in the cluster. It's always the same files that need healing.
>>>>>>>>>
>>>>>>>>> When running gluster volume heal data-hdd statistics, I sometimes see different information, but always some number of "heal failed" entries. It shows 0 for split brain.
>>>>>>>>>
>>>>>>>>> I'm not quite sure what to do. I suspect it may be due to nodes 1 and 2 still being on the older ovirt/gluster release, but I'm afraid to upgrade and reboot them until I have a good gluster sync (don't need to create a split-brain issue). How do I proceed with this?
>>>>>>>>>
>>>>>>>>> Second issue: I've been experiencing VERY POOR performance on most of my VMs. To the tune that logging into a Windows 10 VM via remote desktop can take 5 minutes, and launching QuickBooks inside said VM can easily take 10 minutes. On some Linux VMs, I get random messages like this:
>>>>>>>>> Message from syslogd@unifi at May 28 20:39:23 ...
>>>>>>>>> kernel:[6171996.308904] NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [mongod:14766]
>>>>>>>>>
>>>>>>>>> (the process and PID are often different)
>>>>>>>>>
>>>>>>>>> I'm not quite sure what to do about this either. My initial thought was to upgrade everything to current and see if it's still there, but I cannot move forward with that until my gluster is healed...
>>>>>>>>>
>>>>>>>>> Thanks!
>>>>>>>>> --Jim
>>>>>>>>>
>>>>>>>>> _______________________________________________
>>>>>>>>> Users mailing list -- users@ovirt.org
>>>>>>>>> To unsubscribe send an email to users-leave@ovirt.org
>>>>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>>>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>>>>>>>>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3LEV6ZQ3JV2XLAL7NYBTXOYMYUOTIRQF/
>>>>
>>>> _______________________________________________
>>>> Users mailing list -- users@ovirt.org
>>>> To unsubscribe send an email to users-leave@ovirt.org
>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>>>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/ACO7RFSLBSRBAIONIC2HQ6Z24ZDES5MF/
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/JKBSLSZFRMFGILXGZB4YAMJIJKEKNGMA/
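For anyone following along: Sahina's request above for "volume profiling" output can be gathered with gluster's built-in profiler. A minimal sketch, assuming the slow volume is data-hdd (the commands are standard GlusterFS CLI; substitute your own volume name, and note this needs a live cluster):

```shell
# Enable profiling for the volume (adds a small overhead while enabled)
gluster volume profile data-hdd start

# ...let the slow workload run for a few minutes, then dump per-brick
# cumulative and interval stats: latency per FOP, read/write block sizes
gluster volume profile data-hdd info > /tmp/data-hdd-profile.txt

# Disable profiling when done collecting
gluster volume profile data-hdd stop
```

The `info` output is what is usually attached to threads like this one; unusually high FSYNC/WRITE latencies there would line up with the storage stalls described elsewhere in this thread.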

So, on ovirt2, I ran hosted-engine --vm-status and it took over 2 minutes to return, but two of the 3 machines had current (non-stale) data:

[root@ovirt2 ~]# hosted-engine --vm-status

--== Host 1 status ==--

conf_on_shared_storage : True
Status up-to-date      : False
Hostname               : ovirt1.nwfiber.com
Host ID                : 1
Engine status          : unknown stale-data
Score                  : 3400
stopped                : False
Local maintenance      : False
crc32                  : f562054b
local_conf_timestamp   : 7368
Host timestamp         : 7367
Extra metadata (valid at timestamp):
        metadata_parse_version=1
        metadata_feature_version=1
        timestamp=7367 (Wed May 30 00:05:47 2018)
        host-id=1
        score=3400
        vm_conf_refresh_time=7368 (Wed May 30 00:05:48 2018)
        conf_on_shared_storage=True
        maintenance=False
        state=EngineStart
        stopped=False

--== Host 2 status ==--

conf_on_shared_storage : True
Status up-to-date      : True
Hostname               : ovirt2.nwfiber.com
Host ID                : 2
Engine status          : {"reason": "bad vm status", "health": "bad", "vm": "down_unexpected", "detail": "Down"}
Score                  : 0
stopped                : False
Local maintenance      : False
crc32                  : 6822f86a
local_conf_timestamp   : 3610
Host timestamp         : 3610
Extra metadata (valid at timestamp):
        metadata_parse_version=1
        metadata_feature_version=1
        timestamp=3610 (Wed May 30 00:04:28 2018)
        host-id=2
        score=0
        vm_conf_refresh_time=3610 (Wed May 30 00:04:28 2018)
        conf_on_shared_storage=True
        maintenance=False
        state=EngineUnexpectedlyDown
        stopped=False
        timeout=Wed Dec 31 17:09:30 1969

--== Host 3 status ==--

conf_on_shared_storage : True
Status up-to-date      : True
Hostname               : ovirt3.nwfiber.com
Host ID                : 3
Engine status          : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                  : 3400
stopped                : False
Local maintenance      : False
crc32                  : 006512a3
local_conf_timestamp   : 99294
Host timestamp         : 99294
Extra metadata (valid at timestamp):
        metadata_parse_version=1
        metadata_feature_version=1
        timestamp=99294 (Wed May 30 00:04:36 2018)
        host-id=3
        score=3400
        vm_conf_refresh_time=99294 (Wed May 30 00:04:36 2018)
        conf_on_shared_storage=True
        maintenance=False
        state=EngineStarting
        stopped=False

---------------

I saw that the ha score was 0 on ovirt2 (where I was focusing my efforts), and ovirt3 had a good ha score, so I tried to launch it there:

[root@ovirt3 ff3d0185-d120-40ee-9792-ebbd5caaca8b]# hosted-engine --vm-start
Command VM.getStats with args {'vmID': '50392390-4f89-435c-bf3b-8254b58a4ef7'} failed:
(code=1, message=Virtual machine does not exist: {'vmId': u'50392390-4f89-435c-bf3b-8254b58a4ef7'})

ovirt1 was returning the ha-agent or storage not online message, but now it's hanging (probably 2 minutes), so it may actually return status this time. I figure if it can't return status, then there's no point in trying to start the VM there.

On Wed, May 30, 2018 at 12:08 AM, Jim Kusznir <jim@palousetech.com> wrote:
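Not something suggested in the thread itself, but a common first step when --vm-status hangs or reports stale-data is to restart the hosted-engine HA daemons on each host so they re-read the shared metadata. A sketch, assuming the storage domain itself is reachable (run on each host in turn):

```shell
# Restart the HA broker first, then the agent (the agent talks to the broker)
systemctl restart ovirt-ha-broker
systemctl restart ovirt-ha-agent

# Give the agents a minute to publish fresh metadata, then re-check
hosted-engine --vm-status
```

If --vm-status still hangs after this, that usually points back at the storage domain (here, the gluster engine volume) rather than the HA daemons.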
The agent.log was sent minutes ago in another message. Here's the gluster client log for the engine volume:
[2018-05-30 06:31:12.343709] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata selfheal on 3a430e50-48cb-463c-8657-aab5458ac155
[2018-05-30 06:31:12.348210] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-engine-replicate-0: expunging file 117c5234-e6d6-4f03-809b-a4ef6a6fac4d/hosted-engine.metadata (38a277cd-113d-4ca8-8f48-fadc944d3272) on engine-client-1
[2018-05-30 06:31:12.360602] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed metadata selfheal on 3a430e50-48cb-463c-8657-aab5458ac155. sources=[0] 2 sinks=1
[2018-05-30 06:31:12.379169] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed entry selfheal on 117c5234-e6d6-4f03-809b-a4ef6a6fac4d. sources=[0] 2 sinks=1
[2018-05-30 06:32:54.756729] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f
[2018-05-30 06:32:54.763175] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed metadata selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 06:37:06.581538] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 06c557a9-b550-4e25-b190-804e50cea65b. sources=[0] 2 sinks=1
[2018-05-30 06:39:55.739302] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 06:44:04.911825] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 06:58:05.123854] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata selfheal on 03ca7a59-e8ee-4a70-bc68-47e972aeb801
[2018-05-30 06:58:05.131929] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed metadata selfheal on 03ca7a59-e8ee-4a70-bc68-47e972aeb801. sources=[0] 2 sinks=1
[2018-05-30 07:01:42.583991] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 07:05:48.777982] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
--------
broker log:
--------
Thread-1::INFO::2018-05-30 00:07:13,241::ping::60::ping.Ping::(action) Successfully pinged 192.168.8.1
Listener::INFO::2018-05-30 00:07:14,344::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:19,356::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-5::INFO::2018-05-30 00:07:22,081::engine_health::202::engine_health.EngineHealth::(_result_from_stats) VM not running on this host, status Down
Listener::INFO::2018-05-30 00:07:24,369::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:29,382::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:34,393::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:39,406::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:44,419::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:49,431::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:54,444::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:59,452::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-2::INFO::2018-05-30 00:08:03,736::mgmt_bridge::62::mgmt_bridge.MgmtBridge::(action) Found bridge ovirtmgmt with ports
Listener::INFO::2018-05-30 00:08:04,460::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:08:09,468::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-4::INFO::2018-05-30 00:08:11,461::cpu_load_no_engine::116::cpu_load_no_engine.CpuLoadNoEngine::(calculate_load) VM stats do not contain cpu usage. VM might be down.
Thread-4::INFO::2018-05-30 00:08:11,461::cpu_load_no_engine::126::cpu_load_no_engine.CpuLoadNoEngine::(calculate_load) System load total=0.0169, engine=0.0000, non-engine=0.0169
Listener::INFO::2018-05-30 00:08:14,481::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:08:19,490::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-5::INFO::2018-05-30 00:08:22,175::engine_health::202::engine_health.EngineHealth::(_result_from_stats) VM not running on this host, status Down
Thread-1::INFO::2018-05-30 00:08:22,436::ping::60::ping.Ping::(action) Successfully pinged 192.168.8.1
Listener::INFO::2018-05-30 00:08:24,503::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
On Wed, May 30, 2018 at 12:03 AM, Sahina Bose <sabose@redhat.com> wrote:
[Adding gluster-users back]
Nothing amiss with volume info and status. Can you check the agent.log and broker.log? They will be under /var/log/ovirt-hosted-engine-ha/
Also the gluster client logs - under /var/log/glusterfs/rhev-data-center-mnt-glusterSD<volume>.log
On Wed, May 30, 2018 at 12:08 PM, Jim Kusznir <jim@palousetech.com> wrote:
I believe the gluster data store for the engine is up and working correctly. The rest are not mounted, as the engine hasn't managed to start correctly yet. I did perform the copy-a-junk-file test: copy a file into the data store, check that it's there on another host, then delete it and see that it disappeared on the first host; it passed that test. Here's the info and status. (I have NOT performed the steps that Krutika and Ravishankar suggested yet, as I don't have my data volumes working again yet.)
[root@ovirt2 images]# gluster volume info
Volume Name: data
Type: Replicate
Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick2/data
Brick2: ovirt2.nwfiber.com:/gluster/brick2/data
Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter)
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
server.allow-insecure: on
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on

Volume Name: data-hdd
Type: Replicate
Volume ID: d342a3ab-16f3-49f0-bbcf-f788be8ac5f1
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 172.172.1.11:/gluster/brick3/data-hdd
Brick2: 172.172.1.12:/gluster/brick3/data-hdd
Brick3: 172.172.1.13:/gluster/brick3/data-hdd
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
network.ping-timeout: 30
server.allow-insecure: on
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
performance.readdir-ahead: on

Volume Name: engine
Type: Replicate
Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on

Volume Name: iso
Type: Replicate
Volume ID: b1ba15f5-0f0f-4411-89d0-595179f02b92
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick4/iso
Brick2: ovirt2.nwfiber.com:/gluster/brick4/iso
Brick3: ovirt3.nwfiber.com:/gluster/brick4/iso (arbiter)
Options Reconfigured:
performance.strict-o-direct: on
nfs.disable: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 6
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
[root@ovirt2 images]# gluster volume status
Status of volume: data
Gluster process                                TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick2/data  49152     0          Y       3226
Brick ovirt2.nwfiber.com:/gluster/brick2/data  49152     0          Y       2967
Brick ovirt3.nwfiber.com:/gluster/brick2/data  49152     0          Y       2554
Self-heal Daemon on localhost                  N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com         N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com         N/A       N/A        Y       4771

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: data-hdd
Gluster process                                TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 172.172.1.11:/gluster/brick3/data-hdd    49153     0          Y       3232
Brick 172.172.1.12:/gluster/brick3/data-hdd    49153     0          Y       2976
Brick 172.172.1.13:/gluster/brick3/data-hdd    N/A       N/A        N       N/A
NFS Server on localhost                        N/A       N/A        N       N/A
Self-heal Daemon on localhost                  N/A       N/A        Y       4818
NFS Server on ovirt3.nwfiber.com               N/A       N/A        N       N/A
Self-heal Daemon on ovirt3.nwfiber.com         N/A       N/A        Y       17853
NFS Server on ovirt1.nwfiber.com               N/A       N/A        N       N/A
Self-heal Daemon on ovirt1.nwfiber.com         N/A       N/A        Y       4771

Task Status of Volume data-hdd
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: engine
Gluster process                                  TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick1/engine  49154     0          Y       3239
Brick ovirt2.nwfiber.com:/gluster/brick1/engine  49154     0          Y       2982
Brick ovirt3.nwfiber.com:/gluster/brick1/engine  49154     0          Y       2578
Self-heal Daemon on localhost                    N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com           N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com           N/A       N/A        Y       4771

Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: iso
Gluster process                                TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick4/iso   49155     0          Y       3247
Brick ovirt2.nwfiber.com:/gluster/brick4/iso   49155     0          Y       2990
Brick ovirt3.nwfiber.com:/gluster/brick4/iso   49155     0          Y       2580
Self-heal Daemon on localhost                  N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com         N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com         N/A       N/A        Y       4771

Task Status of Volume iso
------------------------------------------------------------------------------
There are no active volume tasks
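One detail worth noting in the status output above: the data-hdd brick on 172.172.1.13 shows Online: N, i.e. its brick process is down. If the brick daemon simply died, a forced start restarts only the missing brick processes without disturbing the ones that are up. A sketch, using the volume name from this thread (run from any node in the trusted pool; needs a live cluster):

```shell
# Restart any offline brick processes for this volume; running bricks are untouched
gluster volume start data-hdd force

# Confirm the brick came back online
gluster volume status data-hdd

# Then trigger a full self-heal and watch the pending-entry list shrink
gluster volume heal data-hdd full
gluster volume heal data-hdd info
```

If the brick immediately goes offline again, the brick log on that node (under /var/log/glusterfs/bricks/) usually shows why, and an underlying-disk failure like the dm-2 I/O errors reported earlier in the thread would explain it.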
On Tue, May 29, 2018 at 11:30 PM, Sahina Bose <sabose@redhat.com> wrote:
On Wed, May 30, 2018 at 10:42 AM, Jim Kusznir <jim@palousetech.com> wrote:
hosted-engine --deploy failed (would not come up on my existing gluster storage). However, I realized no changes were written to my existing storage. So, I went back to trying to get my old engine running.
hosted-engine --vm-status is now taking a very long time (5+ minutes) to return, and it returns stale information everywhere. I thought perhaps the lockspace was corrupt, so I tried to clean that and the metadata, but both are failing (--clean-metadata has hung and I can't even ctrl-c out of it).
How can I reinitialize all the lockspace/metadata safely? There is no engine or VMs running currently....
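For the record, a sketch of the usual sequence for reinitializing the hosted-engine lockspace and metadata (option names per the hosted-engine tool of that era; verify against `hosted-engine --help` on your version, and note these commands hang in exactly the same way --vm-status does if the underlying storage is not reachable):

```shell
# Stop the HA daemons on ALL hosts first, so nothing rewrites the shared metadata
systemctl stop ovirt-ha-agent ovirt-ha-broker

# On one host: reinitialize the sanlock lockspace on the shared storage
hosted-engine --reinitialize-lockspace --force

# Optionally wipe one host's slot in the metadata whiteboard
# (host-id 2 here is just an example, matching ovirt2 in this thread)
hosted-engine --clean-metadata --host-id=2 --force-clean

# Restart the daemons everywhere and re-check
systemctl start ovirt-ha-broker ovirt-ha-agent
hosted-engine --vm-status
```

The key point, echoed in the reply below this message in the thread, is that none of this helps until the storage itself responds; the hangs Jim describes are consistent with the gluster engine volume being unhealthy rather than with corrupt metadata.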
I think the first thing to make sure of is that your storage is up and running. Can you mount the gluster volumes and access the contents there? Please provide the gluster volume info and gluster volume status of the volumes that you're using.
--Jim
On Tue, May 29, 2018 at 9:33 PM, Jim Kusznir <jim@palousetech.com> wrote:
Well, things went from bad to very, very bad....
It appears that during one of the 2 minute lockups, the fencing agents decided that another node in the cluster was down. As a result, 2 of the 3 nodes were simultaneously reset with fencing agent reboot. After the nodes came back up, the engine would not start. All running VMs (including VMs on the 3rd node that was not rebooted) crashed.
I've now been working for about 3 hours trying to get the engine to come up. I don't know why it won't start. hosted-engine --vm-start says it's starting, but it doesn't start (virsh doesn't show any VMs running). I'm currently running --deploy, as I had run out of options for anything else I could come up with. I hope this will allow me to re-import all my existing VMs and allow me to start them back up after everything comes back up.
I do have an unverified geo-rep backup; I don't know if it is a good backup (there were several prior messages to this list, but I didn't get replies to my questions; it was behaving in ways I believe to be strange, and the data directories are larger than their source).
I'll see if my --deploy works, and if not, I'll be back with another message/help request.
When the dust settles and I'm at least minimally functional again, I really want to understand why all these technologies designed to offer redundancy conspired to reduce uptime and create failures where there weren't any otherwise. I thought that with hosted engine, 3 ovirt servers, and glusterfs with minimum replica 2+arbiter or replica 3, I should have had strong resilience against server failure or disk failure, and should have prevented / recovered from data corruption. Instead, all of the above happened (once I get my cluster back up, I still have to try and recover my webserver VM, which won't boot due to XFS corrupt-journal issues created during the gluster crashes). I think a lot of these issues were rooted in the upgrade from 4.1 to 4.2.
--Jim
On Tue, May 29, 2018 at 6:25 PM, Jim Kusznir <jim@palousetech.com> wrote:
> I also finally found the following in my system log on one server:
>
> [10679.524491] INFO: task glusterclogro:14933 blocked for more than 120 seconds.
> [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [10679.527144] glusterclogro   D ffff97209832bf40     0 14933      1 0x00000080
> [10679.527150] Call Trace:
> [10679.527161]  [<ffffffffb9913f79>] schedule+0x29/0x70
> [10679.527218]  [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
> [10679.527225]  [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
> [10679.527254]  [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
> [10679.527260]  [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
> [10679.527268]  [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
> [10679.527271]  [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20
> [10679.527275]  [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
> [10679.527279]  [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
> [10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds.
> [10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [10679.529956] glusterposixfsy D ffff972495f84f10     0 14941      1 0x00000080
> [10679.529961] Call Trace:
> [10679.529966]  [<ffffffffb9913f79>] schedule+0x29/0x70
> [10679.530003]  [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
> [10679.530008]  [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
> [10679.530038]  [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
> [10679.530042]  [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
> [10679.530046]  [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
> [10679.530050]  [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20
> [10679.530054]  [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
> [10679.530058]  [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
> [10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120 seconds.
> [10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [10679.533732] glusteriotwr13  D ffff9720a83f0000     0 15486      1 0x00000080
> [10679.533738] Call Trace:
> [10679.533747]  [<ffffffffb9913f79>] schedule+0x29/0x70
> [10679.533799]  [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
> [10679.533806]  [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
> [10679.533846]  [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
> [10679.533852]  [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
> [10679.533858]  [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
> [10679.533863]  [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20
> [10679.533868]  [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
> [10679.533873]  [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
> [10919.512757] INFO: task glusterclogro:14933 blocked for more than 120 seconds.
> [10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [10919.516663] glusterclogro   D ffff97209832bf40     0 14933      1 0x00000080
> [10919.516677] Call Trace:
> [10919.516690]  [<ffffffffb9913f79>] schedule+0x29/0x70
> [10919.516696]  [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
> [10919.516703]  [<ffffffffb951cc04>] ? blk_finish_plug+0x14/0x40
> [10919.516768]  [<ffffffffc05e9224>] ? _xfs_buf_ioapply+0x334/0x460 [xfs]
> [10919.516774]  [<ffffffffb991432d>] wait_for_completion+0xfd/0x140
> [10919.516782]  [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
> [10919.516821]  [<ffffffffc05eb0a3>] ? _xfs_buf_read+0x23/0x40 [xfs]
> [10919.516859]  [<ffffffffc05eafa9>] xfs_buf_submit_wait+0xf9/0x1d0 [xfs]
> [10919.516902]  [<ffffffffc061b279>] ? xfs_trans_read_buf_map+0x199/0x400 [xfs]
> [10919.516940]  [<ffffffffc05eb0a3>] _xfs_buf_read+0x23/0x40 [xfs]
> [10919.516977]  [<ffffffffc05eb1b9>] xfs_buf_read_map+0xf9/0x160 [xfs]
> [10919.517022]  [<ffffffffc061b279>] xfs_trans_read_buf_map+0x199/0x400 [xfs]
> [10919.517057]  [<ffffffffc05c8d04>] xfs_da_read_buf+0xd4/0x100 [xfs]
> [10919.517091]  [<ffffffffc05c8d53>] xfs_da3_node_read+0x23/0xd0 [xfs]
> [10919.517126]  [<ffffffffc05c9fee>] xfs_da3_node_lookup_int+0x6e/0x2f0 [xfs]
> [10919.517160]  [<ffffffffc05d5a1d>] xfs_dir2_node_lookup+0x4d/0x170 [xfs]
> [10919.517194]  [<ffffffffc05ccf5d>] xfs_dir_lookup+0x1bd/0x1e0 [xfs]
> [10919.517233]  [<ffffffffc05fd8d9>] xfs_lookup+0x69/0x140 [xfs]
> [10919.517271]  [<ffffffffc05fa018>] xfs_vn_lookup+0x78/0xc0 [xfs]
> [10919.517278]  [<ffffffffb9425cf3>] lookup_real+0x23/0x60
> [10919.517283]  [<ffffffffb9426702>] __lookup_hash+0x42/0x60
> [10919.517288]  [<ffffffffb942d519>] SYSC_renameat2+0x3a9/0x5a0
> [10919.517296]  [<ffffffffb94d3753>] ? selinux_file_free_security+0x23/0x30
> [10919.517304]  [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
> [10919.517309]  [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
> [10919.517313]  [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
> [10919.517318]  [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
> [10919.517323]  [<ffffffffb942e58e>] SyS_renameat2+0xe/0x10
> [10919.517328]  [<ffffffffb942e5ce>] SyS_rename+0x1e/0x20
> [10919.517333]  [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
> [10919.517339]  [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
> [11159.496095] INFO: task glusteriotwr9:15482 blocked for more than 120 seconds.
> [11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [11159.498978] glusteriotwr9   D ffff971fa0fa1fa0     0 15482      1 0x00000080
> [11159.498984] Call Trace:
> [11159.498995]  [<ffffffffb9911f00>] ? bit_wait+0x50/0x50
> [11159.498999]  [<ffffffffb9913f79>] schedule+0x29/0x70
> [11159.499003]  [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
> [11159.499056]  [<ffffffffc05dd9b7>] ? xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs]
> [11159.499082]  [<ffffffffc05dd43e>] ? xfs_iext_bno_to_irec+0x8e/0xd0 [xfs]
> [11159.499090]  [<ffffffffb92f7a12>] ? ktime_get_ts64+0x52/0xf0
> [11159.499093]  [<ffffffffb9911f00>] ? bit_wait+0x50/0x50
> [11159.499097]  [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130
> [11159.499101]  [<ffffffffb9913528>] io_schedule+0x18/0x20
> [11159.499104]  [<ffffffffb9911f11>] bit_wait_io+0x11/0x50
> [11159.499107]  [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0
> [11159.499113]  [<ffffffffb9393634>] __lock_page+0x74/0x90
> [11159.499118]  [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40
> [11159.499121]  [<ffffffffb9394154>] __find_lock_page+0x54/0x70
> [11159.499125]  [<ffffffffb9394e85>] grab_cache_page_write_begin+0x55/0xc0
> [11159.499130]  [<ffffffffb9484b76>] iomap_write_begin+0x66/0x100
> [11159.499135]  [<ffffffffb9484edf>] iomap_write_actor+0xcf/0x1d0
> [11159.499140]  [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80
> [11159.499144]  [<ffffffffb94854e7>] iomap_apply+0xb7/0x150
> [11159.499149]  [<ffffffffb9485621>] iomap_file_buffered_write+0xa1/0xe0
> [11159.499153]  [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80
> [11159.499182]  [<ffffffffc05f025d>] xfs_file_buffered_aio_write+0x12d/0x2c0 [xfs]
> [11159.499213]  [<ffffffffc05f057d>] xfs_file_aio_write+0x18d/0x1b0 [xfs]
> [11159.499217]  [<ffffffffb941a533>] do_sync_write+0x93/0xe0
> [11159.499222]  [<ffffffffb941b010>] vfs_write+0xc0/0x1f0
> [11159.499225]  [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0
> [11159.499230]  [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
> [11159.499234]  [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
> [11159.499238]  [<ffffffffb992077b>] ?
system_call_after_swapgs+0xc8/ > 0x160 > [11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than > 120 seconds. > [11279.490197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" > disables this message. > [11279.491665] xfsaild/dm-10 D ffff9720a8660fd0 0 1134 2 > 0x00000000 > [11279.491671] Call Trace: > [11279.491682] [<ffffffffb92a3a2e>] ? try_to_del_timer_sync+0x5e/0x9 > 0 > [11279.491688] [<ffffffffb9913f79>] schedule+0x29/0x70 > [11279.491744] [<ffffffffc060de36>] _xfs_log_force+0x1c6/0x2c0 [xfs] > [11279.491750] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 > [11279.491783] [<ffffffffc0619fec>] ? xfsaild+0x16c/0x6f0 [xfs] > [11279.491817] [<ffffffffc060df5c>] xfs_log_force+0x2c/0x70 [xfs] > [11279.491849] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 > [xfs] > [11279.491880] [<ffffffffc0619fec>] xfsaild+0x16c/0x6f0 [xfs] > [11279.491913] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 > [xfs] > [11279.491919] [<ffffffffb92bb161>] kthread+0xd1/0xe0 > [11279.491926] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 > [11279.491932] [<ffffffffb9920677>] ret_from_fork_nospec_begin+0x2 > 1/0x21 > [11279.491936] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 > [11279.491976] INFO: task glusterclogfsyn:14934 blocked for more > than 120 seconds. > [11279.493466] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" > disables this message. > [11279.494952] glusterclogfsyn D ffff97209832af70 0 14934 1 > 0x00000080 > [11279.494957] Call Trace: > [11279.494979] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 > [dm_mod] > [11279.494983] [<ffffffffb9913f79>] schedule+0x29/0x70 > [11279.494987] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 > [11279.494997] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 > [dm_mod] > [11279.495001] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 > [11279.495005] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x > 140 > [11279.495010] [<ffffffffb92cf1b0>] ? 
wake_up_state+0x20/0x20 > [11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 > [11279.495049] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 > [xfs] > [11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] > [11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 > [11279.495090] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ > 0x160 > [11279.495094] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 > [11279.495098] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 > [11279.495102] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ > 0x160 > [11279.495105] INFO: task glusterposixfsy:14941 blocked for more > than 120 seconds. > [11279.496606] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" > disables this message. > [11279.498114] glusterposixfsy D ffff972495f84f10 0 14941 1 > 0x00000080 > [11279.498118] Call Trace: > [11279.498134] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 > [dm_mod] > [11279.498138] [<ffffffffb9913f79>] schedule+0x29/0x70 > [11279.498142] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 > [11279.498152] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 > [dm_mod] > [11279.498156] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 > [11279.498160] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x > 140 > [11279.498165] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 > [11279.498169] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 > [11279.498202] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 > [xfs] > [11279.498231] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] > [11279.498238] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 > [11279.498242] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ > 0x160 > [11279.498246] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 > [11279.498250] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 > [11279.498254] [<ffffffffb992077b>] ? 
system_call_after_swapgs+0xc8/ > 0x160 > [11279.498257] INFO: task glusteriotwr1:14950 blocked for more than > 120 seconds. > [11279.499789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" > disables this message. > [11279.501343] glusteriotwr1 D ffff97208b6daf70 0 14950 1 > 0x00000080 > [11279.501348] Call Trace: > [11279.501353] [<ffffffffb9913f79>] schedule+0x29/0x70 > [11279.501390] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 > [xfs] > [11279.501396] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 > [11279.501428] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] > [11279.501432] [<ffffffffb944ef3f>] generic_write_sync+0x4f/0x70 > [11279.501461] [<ffffffffc05f0545>] xfs_file_aio_write+0x155/0x1b0 > [xfs] > [11279.501466] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 > [11279.501471] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 > [11279.501475] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 > [11279.501479] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ > 0x160 > [11279.501483] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 > [11279.501489] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ > 0x160 > [11279.501493] INFO: task glusteriotwr4:14953 blocked for more than > 120 seconds. > [11279.503047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" > disables this message. > [11279.504630] glusteriotwr4 D ffff972499f2bf40 0 14953 1 > 0x00000080 > [11279.504635] Call Trace: > [11279.504640] [<ffffffffb9913f79>] schedule+0x29/0x70 > [11279.504676] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 > [xfs] > [11279.504681] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 > [11279.504710] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] > [11279.504714] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 > [11279.504718] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ > 0x160 > [11279.504722] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 > [11279.504725] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 > [11279.504730] [<ffffffffb992077b>] ? 
system_call_after_swapgs+0xc8/ > 0x160 > [12127.466494] perf: interrupt took too long (8263 > 8150), lowering > kernel.perf_event_max_sample_rate to 24000 > > -------------------- > I think this is the cause of the massive ovirt performance issues > irrespective of gluster volume. At the time this happened, I was also > ssh'ed into the host, and was doing some rpm query commands. I had just > run rpm -qa | grep glusterfs (to verify what version was actually > installed), and that command took almost 2 minutes to return! Normally it > takes less than 2 seconds. That is all pure local SSD IO, too.... > > I'm no expert, but it's my understanding that any time software > causes these kinds of issues, it's a serious bug in that software, even if > it's mishandled exceptions. Is this correct? > > --Jim > > On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir <jim@palousetech.com> > wrote: > >> I think this is the profile information for one of the volumes that >> lives on the SSDs and is fully operational with no down/problem disks: >> >> [root@ovirt2 yum.repos.d]# gluster volume profile data info >> Brick: ovirt2.nwfiber.com:/gluster/brick2/data >> ---------------------------------------------- >> Cumulative Stats: >> Block Size: 256b+ 512b+ >> 1024b+ >> No. of Reads: 983 2696 >> 1059 >> No. of Writes: 0 1113 >> 302 >> >> Block Size: 2048b+ 4096b+ >> 8192b+ >> No. of Reads: 852 88608 >> 53526 >> No. of Writes: 522 812340 >> 76257 >> >> Block Size: 16384b+ 32768b+ >> 65536b+ >> No. of Reads: 54351 241901 >> 15024 >> No. of Writes: 21636 8656 >> 8976 >> >> Block Size: 131072b+ >> No. of Reads: 524156 >> No. of Writes: 296071 >> %-latency Avg-latency Min-Latency Max-Latency No. 
of >> calls Fop >> --------- ----------- ----------- ----------- >> ------------ ---- >> 0.00 0.00 us 0.00 us 0.00 us >> 4189 RELEASE >> 0.00 0.00 us 0.00 us 0.00 us >> 1257 RELEASEDIR >> 0.00 46.19 us 12.00 us 187.00 us >> 69 FLUSH >> 0.00 147.00 us 78.00 us 367.00 us 86 >> REMOVEXATTR >> 0.00 223.46 us 24.00 us 1166.00 us >> 149 READDIR >> 0.00 565.34 us 76.00 us 3639.00 us >> 88 FTRUNCATE >> 0.00 263.28 us 20.00 us 28385.00 us >> 228 LK >> 0.00 98.84 us 2.00 us 880.00 us >> 1198 OPENDIR >> 0.00 91.59 us 26.00 us 10371.00 us >> 3853 STATFS >> 0.00 494.14 us 17.00 us 193439.00 us >> 1171 GETXATTR >> 0.00 299.42 us 35.00 us 9799.00 us >> 2044 READDIRP >> 0.00 1965.31 us 110.00 us 382258.00 us >> 321 XATTROP >> 0.01 113.40 us 24.00 us 61061.00 us >> 8134 STAT >> 0.01 755.38 us 57.00 us 607603.00 us >> 3196 DISCARD >> 0.05 2690.09 us 58.00 us 2704761.00 us >> 3206 OPEN >> 0.10 119978.25 us 97.00 us 9406684.00 us >> 154 SETATTR >> 0.18 101.73 us 28.00 us 700477.00 us >> 313379 FSTAT >> 0.23 1059.84 us 25.00 us 2716124.00 us >> 38255 LOOKUP >> 0.47 1024.11 us 54.00 us 6197164.00 us >> 81455 FXATTROP >> 1.72 2984.00 us 15.00 us 37098954.00 us >> 103020 FINODELK >> 5.92 44315.32 us 51.00 us 24731536.00 us >> 23957 FSYNC >> 13.27 2399.78 us 25.00 us 22089540.00 us >> 991005 READ >> 37.00 5980.43 us 52.00 us 22099889.00 us >> 1108976 WRITE >> 41.04 5452.75 us 13.00 us 22102452.00 us >> 1349053 INODELK >> >> Duration: 10026 seconds >> Data Read: 80046027759 bytes >> Data Written: 44496632320 bytes >> >> Interval 1 Stats: >> Block Size: 256b+ 512b+ >> 1024b+ >> No. of Reads: 983 2696 >> 1059 >> No. of Writes: 0 838 >> 185 >> >> Block Size: 2048b+ 4096b+ >> 8192b+ >> No. of Reads: 852 85856 >> 51575 >> No. of Writes: 382 705802 >> 57812 >> >> Block Size: 16384b+ 32768b+ >> 65536b+ >> No. of Reads: 52673 232093 >> 14984 >> No. of Writes: 13499 4908 >> 4242 >> >> Block Size: 131072b+ >> No. of Reads: 460040 >> No. 
of Writes: 6411 >> %-latency Avg-latency Min-Latency Max-Latency No. of >> calls Fop >> --------- ----------- ----------- ----------- >> ------------ ---- >> 0.00 0.00 us 0.00 us 0.00 us >> 2093 RELEASE >> 0.00 0.00 us 0.00 us 0.00 us >> 1093 RELEASEDIR >> 0.00 53.38 us 26.00 us 111.00 us >> 16 FLUSH >> 0.00 145.14 us 78.00 us 367.00 us 71 >> REMOVEXATTR >> 0.00 190.96 us 114.00 us 298.00 us >> 71 SETATTR >> 0.00 213.38 us 24.00 us 1145.00 us >> 90 READDIR >> 0.00 263.28 us 20.00 us 28385.00 us >> 228 LK >> 0.00 101.76 us 2.00 us 880.00 us >> 1093 OPENDIR >> 0.01 93.60 us 27.00 us 10371.00 us >> 3090 STATFS >> 0.02 537.47 us 17.00 us 193439.00 us >> 1038 GETXATTR >> 0.03 297.44 us 35.00 us 9799.00 us >> 1990 READDIRP >> 0.03 2357.28 us 110.00 us 382258.00 us >> 253 XATTROP >> 0.04 385.93 us 58.00 us 47593.00 us >> 2091 OPEN >> 0.04 114.86 us 24.00 us 61061.00 us >> 7715 STAT >> 0.06 444.59 us 57.00 us 333240.00 us >> 3053 DISCARD >> 0.42 316.24 us 25.00 us 290728.00 us >> 29823 LOOKUP >> 0.73 257.92 us 54.00 us 344812.00 us >> 63296 FXATTROP >> 1.37 98.30 us 28.00 us 67621.00 us >> 313172 FSTAT >> 1.58 2124.69 us 51.00 us 849200.00 us >> 16717 FSYNC >> 5.73 162.46 us 52.00 us 748492.00 us >> 794079 WRITE >> 7.19 2065.17 us 16.00 us 37098954.00 us >> 78381 FINODELK >> 36.44 886.32 us 25.00 us 2216436.00 us >> 925421 READ >> 46.30 1178.04 us 13.00 us 1700704.00 us >> 884635 INODELK >> >> Duration: 7485 seconds >> Data Read: 71250527215 bytes >> Data Written: 5119903744 bytes >> >> Brick: ovirt3.nwfiber.com:/gluster/brick2/data >> ---------------------------------------------- >> Cumulative Stats: >> Block Size: 1b+ >> No. of Reads: 0 >> No. of Writes: 3264419 >> %-latency Avg-latency Min-Latency Max-Latency No. 
of >> calls Fop >> --------- ----------- ----------- ----------- >> ------------ ---- >> 0.00 0.00 us 0.00 us 0.00 us >> 90 FORGET >> 0.00 0.00 us 0.00 us 0.00 us >> 9462 RELEASE >> 0.00 0.00 us 0.00 us 0.00 us >> 4254 RELEASEDIR >> 0.00 50.52 us 13.00 us 190.00 us >> 71 FLUSH >> 0.00 186.97 us 87.00 us 713.00 us 86 >> REMOVEXATTR >> 0.00 79.32 us 33.00 us 189.00 us >> 228 LK >> 0.00 220.98 us 129.00 us 513.00 us >> 86 SETATTR >> 0.01 259.30 us 26.00 us 2632.00 us >> 137 READDIR >> 0.02 322.76 us 145.00 us 2125.00 us >> 321 XATTROP >> 0.03 109.55 us 2.00 us 1258.00 us >> 1193 OPENDIR >> 0.05 70.21 us 21.00 us 431.00 us >> 3196 DISCARD >> 0.05 169.26 us 21.00 us 2315.00 us >> 1545 GETXATTR >> 0.12 176.85 us 63.00 us 2844.00 us >> 3206 OPEN >> 0.61 303.49 us 90.00 us 3085.00 us >> 9633 FSTAT >> 2.44 305.66 us 28.00 us 3716.00 us >> 38230 LOOKUP >> 4.52 266.22 us 55.00 us 53424.00 us >> 81455 FXATTROP >> 6.96 1397.99 us 51.00 us 64822.00 us >> 23889 FSYNC >> 16.48 84.74 us 25.00 us 6917.00 us >> 932592 WRITE >> 30.16 106.90 us 13.00 us 3920189.00 us >> 1353046 INODELK >> 38.55 1794.52 us 14.00 us 16210553.00 us >> 103039 FINODELK >> >> Duration: 66562 seconds >> Data Read: 0 bytes >> Data Written: 3264419 bytes >> >> Interval 1 Stats: >> Block Size: 1b+ >> No. of Reads: 0 >> No. of Writes: 794080 >> %-latency Avg-latency Min-Latency Max-Latency No. 
of >> calls Fop >> --------- ----------- ----------- ----------- >> ------------ ---- >> 0.00 0.00 us 0.00 us 0.00 us >> 2093 RELEASE >> 0.00 0.00 us 0.00 us 0.00 us >> 1093 RELEASEDIR >> 0.00 70.31 us 26.00 us 125.00 us >> 16 FLUSH >> 0.00 193.10 us 103.00 us 713.00 us 71 >> REMOVEXATTR >> 0.01 227.32 us 133.00 us 513.00 us >> 71 SETATTR >> 0.01 79.32 us 33.00 us 189.00 us >> 228 LK >> 0.01 259.83 us 35.00 us 1138.00 us >> 89 READDIR >> 0.03 318.26 us 145.00 us 2047.00 us >> 253 XATTROP >> 0.04 112.67 us 3.00 us 1258.00 us >> 1093 OPENDIR >> 0.06 167.98 us 23.00 us 1951.00 us >> 1014 GETXATTR >> 0.08 70.97 us 22.00 us 431.00 us >> 3053 DISCARD >> 0.13 183.78 us 66.00 us 2844.00 us >> 2091 OPEN >> 1.01 303.82 us 90.00 us 3085.00 us >> 9610 FSTAT >> 3.27 316.59 us 30.00 us 3716.00 us >> 29820 LOOKUP >> 5.83 265.79 us 59.00 us 53424.00 us >> 63296 FXATTROP >> 7.95 1373.89 us 51.00 us 64822.00 us >> 16717 FSYNC >> 23.17 851.99 us 14.00 us 16210553.00 us >> 78555 FINODELK >> 24.04 87.44 us 27.00 us 6917.00 us >> 794081 WRITE >> 34.36 111.91 us 14.00 us 984871.00 us >> 886790 INODELK >> >> Duration: 7485 seconds >> Data Read: 0 bytes >> Data Written: 794080 bytes >> >> >> ----------------------- >> Here is the data from the volume that is backed by the SHDDs and >> has one failed disk: >> [root@ovirt2 yum.repos.d]# gluster volume profile data-hdd info >> Brick: 172.172.1.12:/gluster/brick3/data-hdd >> -------------------------------------------- >> Cumulative Stats: >> Block Size: 256b+ 512b+ >> 1024b+ >> No. of Reads: 1702 86 >> 16 >> No. of Writes: 0 767 >> 71 >> >> Block Size: 2048b+ 4096b+ >> 8192b+ >> No. of Reads: 19 51841 >> 2049 >> No. of Writes: 76 60668 >> 35727 >> >> Block Size: 16384b+ 32768b+ >> 65536b+ >> No. of Reads: 1744 639 >> 1088 >> No. of Writes: 8524 2410 >> 1285 >> >> Block Size: 131072b+ >> No. of Reads: 771999 >> No. of Writes: 29584 >> %-latency Avg-latency Min-Latency Max-Latency No. 
of >> calls Fop >> --------- ----------- ----------- ----------- >> ------------ ---- >> 0.00 0.00 us 0.00 us 0.00 us >> 2902 RELEASE >> 0.00 0.00 us 0.00 us 0.00 us >> 1517 RELEASEDIR >> 0.00 197.00 us 197.00 us 197.00 us >> 1 FTRUNCATE >> 0.00 70.24 us 16.00 us 758.00 us >> 51 FLUSH >> 0.00 143.93 us 82.00 us 305.00 us 57 >> REMOVEXATTR >> 0.00 178.63 us 105.00 us 712.00 us >> 60 SETATTR >> 0.00 67.30 us 19.00 us 572.00 us >> 555 LK >> 0.00 322.80 us 23.00 us 4673.00 us >> 138 READDIR >> 0.00 336.56 us 106.00 us 11994.00 us >> 237 XATTROP >> 0.00 84.70 us 28.00 us 1071.00 us >> 3469 STATFS >> 0.01 387.75 us 2.00 us 146017.00 us >> 1467 OPENDIR >> 0.01 148.59 us 21.00 us 64374.00 us >> 4454 STAT >> 0.02 783.02 us 16.00 us 93502.00 us >> 1902 GETXATTR >> 0.03 1516.10 us 17.00 us 210690.00 us >> 1364 ENTRYLK >> 0.03 2555.47 us 300.00 us 674454.00 us >> 1064 READDIRP >> 0.07 85.74 us 19.00 us 68340.00 us >> 62849 FSTAT >> 0.07 1978.12 us 59.00 us 202596.00 us >> 2729 OPEN >> 0.22 708.57 us 15.00 us 394799.00 us >> 25447 LOOKUP >> 5.94 2331.74 us 15.00 us 1099530.00 us >> 207534 FINODELK >> 7.31 8311.75 us 58.00 us 1800216.00 us >> 71668 FXATTROP >> 12.49 7735.19 us 51.00 us 3595513.00 us >> 131642 WRITE >> 17.70 957.08 us 16.00 us 13700466.00 us >> 1508160 INODELK >> 24.55 2546.43 us 26.00 us 5077347.00 us >> 786060 READ >> 31.56 49699.15 us 47.00 us 3746331.00 us >> 51777 FSYNC >> >> Duration: 10101 seconds >> Data Read: 101562897361 bytes >> Data Written: 4834450432 bytes >> >> Interval 0 Stats: >> Block Size: 256b+ 512b+ >> 1024b+ >> No. of Reads: 1702 86 >> 16 >> No. of Writes: 0 767 >> 71 >> >> Block Size: 2048b+ 4096b+ >> 8192b+ >> No. of Reads: 19 51841 >> 2049 >> No. of Writes: 76 60668 >> 35727 >> >> Block Size: 16384b+ 32768b+ >> 65536b+ >> No. of Reads: 1744 639 >> 1088 >> No. of Writes: 8524 2410 >> 1285 >> >> Block Size: 131072b+ >> No. of Reads: 771999 >> No. of Writes: 29584 >> %-latency Avg-latency Min-Latency Max-Latency No. 
of >> calls Fop >> --------- ----------- ----------- ----------- >> ------------ ---- >> 0.00 0.00 us 0.00 us 0.00 us >> 2902 RELEASE >> 0.00 0.00 us 0.00 us 0.00 us >> 1517 RELEASEDIR >> 0.00 197.00 us 197.00 us 197.00 us >> 1 FTRUNCATE >> 0.00 70.24 us 16.00 us 758.00 us >> 51 FLUSH >> 0.00 143.93 us 82.00 us 305.00 us 57 >> REMOVEXATTR >> 0.00 178.63 us 105.00 us 712.00 us >> 60 SETATTR >> 0.00 67.30 us 19.00 us 572.00 us >> 555 LK >> 0.00 322.80 us 23.00 us 4673.00 us >> 138 READDIR >> 0.00 336.56 us 106.00 us 11994.00 us >> 237 XATTROP >> 0.00 84.70 us 28.00 us 1071.00 us >> 3469 STATFS >> 0.01 387.75 us 2.00 us 146017.00 us >> 1467 OPENDIR >> 0.01 148.59 us 21.00 us 64374.00 us >> 4454 STAT >> 0.02 783.02 us 16.00 us 93502.00 us >> 1902 GETXATTR >> 0.03 1516.10 us 17.00 us 210690.00 us >> 1364 ENTRYLK >> 0.03 2555.47 us 300.00 us 674454.00 us >> 1064 READDIRP >> 0.07 85.73 us 19.00 us 68340.00 us >> 62849 FSTAT >> 0.07 1978.12 us 59.00 us 202596.00 us >> 2729 OPEN >> 0.22 708.57 us 15.00 us 394799.00 us >> 25447 LOOKUP >> 5.94 2334.57 us 15.00 us 1099530.00 us >> 207534 FINODELK >> 7.31 8311.49 us 58.00 us 1800216.00 us >> 71668 FXATTROP >> 12.49 7735.32 us 51.00 us 3595513.00 us >> 131642 WRITE >> 17.71 957.08 us 16.00 us 13700466.00 us >> 1508160 INODELK >> 24.56 2546.42 us 26.00 us 5077347.00 us >> 786060 READ >> 31.54 49651.63 us 47.00 us 3746331.00 us >> 51777 FSYNC >> >> Duration: 10101 seconds >> Data Read: 101562897361 bytes >> Data Written: 4834450432 bytes >> >> >> On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir <jim@palousetech.com> >> wrote: >> >>> Thank you for your response. >>> >>> I have 4 gluster volumes. 3 are replica 2 + arbitrator. replica >>> bricks are on ovirt1 and ovirt2, arbitrator on ovirt3. The 4th volume is >>> replica 3, with a brick on all three ovirt machines. >>> >>> The first 3 volumes are on an SSD disk; the 4th is on a Seagate >>> SSHD (same in all three machines). 
On ovirt3, the SSHD has reported hard >>> IO failures, and that brick is offline. However, the other two replicas >>> are fully operational (although they still show contents in the heal info >>> command that won't go away, but that may be the case until I replace the >>> failed disk). >>> >>> What is bothering me is that ALL 4 gluster volumes are showing >>> horrible performance issues. At this point, as the bad disk has been >>> completely offlined, I would expect gluster to perform at normal speed, but >>> that is definitely not the case. >>> >>> I've also noticed that the performance hits seem to come in waves: >>> things seem to work acceptably (but slowly) for a while, then suddenly it's >>> as if all disk IO on all volumes (including non-gluster local OS disk >>> volumes for the hosts) pauses for about 30 seconds, then IO resumes again. >>> During those times, I start getting VM not responding and host not >>> responding notices as well as the applications having major issues. >>> >>> I've shut down most of my VMs and am down to just my essential >>> core VMs (shed about 75% of my VMs). I'm still experiencing the same >>> issues. >>> >>> Am I correct in believing that once the failed disk was brought >>> offline, performance should return to normal? >>> >>> On Tue, May 29, 2018 at 1:27 PM, Alex K <rightkicktech@gmail.com> >>> wrote: >>> >>>> I would check disk status and the accessibility of the mount points
>>>> >>>> On Tue, May 29, 2018, 22:28 Jim Kusznir <jim@palousetech.com> >>>> wrote: >>>> >>>>> On one ovirt server, I'm now seeing these messages: >>>>> [56474.239725] blk_update_request: 63 callbacks suppressed >>>>> [56474.239732] blk_update_request: I/O error, dev dm-2, sector 0 >>>>> [56474.240602] blk_update_request: I/O error, dev dm-2, sector >>>>> 3905945472 >>>>> [56474.241346] blk_update_request: I/O error, dev dm-2, sector >>>>> 3905945584 >>>>> [56474.242236] blk_update_request: I/O error, dev dm-2, sector >>>>> 2048 >>>>> [56474.243072] blk_update_request: I/O error, dev dm-2, sector >>>>> 3905943424 >>>>> [56474.243997] blk_update_request: I/O error, dev dm-2, sector >>>>> 3905943536 >>>>> [56474.247347] blk_update_request: I/O error, dev dm-2, sector 0 >>>>> [56474.248315] blk_update_request: I/O error, dev dm-2, sector >>>>> 3905945472 >>>>> [56474.249231] blk_update_request: I/O error, dev dm-2, sector >>>>> 3905945584 >>>>> [56474.250221] blk_update_request: I/O error, dev dm-2, sector >>>>> 2048 >>>>> >>>>> >>>>> >>>>> >>>>> On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir < >>>>> jim@palousetech.com> wrote: >>>>> >>>>>> I see in messages on ovirt3 (my 3rd machine, the one upgraded >>>>>> to 4.2): >>>>>> >>>>>> May 29 11:54:41 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>>>> database connection failed (No such file or directory) >>>>>> May 29 11:54:51 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>>>> database connection failed (No such file or directory) >>>>>> May 29 11:55:01 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>>>> database connection failed (No such file or directory) >>>>>> (appears a lot). >>>>>> >>>>>> I also found on the ssh session of that, some sysv warnings >>>>>> about the backing disk for one of the gluster volumes (straight replica >>>>>> 3). The glusterfs process for that disk on that machine went offline. 
It's >>>>>> my understanding that it should continue to work with the other two >>>>>> machines while I attempt to replace that disk, right? Attempted writes >>>>>> (touching an empty file) can take 15 seconds; repeating it later will be >>>>>> much faster. >>>>>> >>>>>> Gluster generates a bunch of different log files; I don't know >>>>>> which ones you want, or from which machine(s). >>>>>> >>>>>> How do I do "volume profiling"? >>>>>> >>>>>> Thanks! >>>>>> >>>>>> On Tue, May 29, 2018 at 11:53 AM, Sahina Bose <sabose@redhat.com> wrote: >>>>>> >>>>>>> Do you see errors reported in the mount logs for the volume? >>>>>>> If so, could you attach the logs? >>>>>>> Any issues with your underlying disks? Can you also attach >>>>>>> output of volume profiling? >>>>>>> >>>>>>> On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir <jim@palousetech.com> wrote: >>>>>>> >>>>>>>> Ok, things have gotten MUCH worse this morning. I'm getting >>>>>>>> random errors from VMs; right now, about a third of my VMs have been paused >>>>>>>> due to storage issues, and most of the remaining VMs are not performing >>>>>>>> well. >>>>>>>> >>>>>>>> At this point, I am in full EMERGENCY mode, as my production >>>>>>>> services are now impacted, and I'm getting calls coming in with problems... >>>>>>>> >>>>>>>> I'd greatly appreciate help... VMs are running VERY slowly >>>>>>>> (when they run), and they are steadily getting worse. I don't know why. I >>>>>>>> was seeing CPU peaks (to 100%) on several VMs, in perfect sync, for a few >>>>>>>> minutes at a time (while the VM became unresponsive and any Linux VMs I was >>>>>>>> logged into were giving me the CPU stuck messages in my >>>>>>>> original post). Is all this storage related? >>>>>>>> >>>>>>>> I also have two different gluster volumes for VM storage, and >>>>>>>> only one had the issues, but now VMs in both are being affected at the same >>>>>>>> time and in the same way. 
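To answer the "volume profiling" question above: gluster has a built-in per-volume profiler that is started, read, and stopped from the CLI. A sketch, using the data-hdd volume name from the thread (run it on any node of the cluster while the slow workload is active):

```shell
#!/bin/sh
# Sketch: collect the volume profiling output that was requested in the
# thread. Profiling adds overhead, so stop it when done.

profile_volume() {
    vol=$1
    if ! command -v gluster >/dev/null 2>&1; then
        echo "gluster CLI not found; run this on a cluster node"
        return 0
    fi
    gluster volume profile "$vol" start   # begin collecting fop latencies
    sleep 30                              # let a representative workload run
    gluster volume profile "$vol" info    # per-brick latency / %-latency table
    gluster volume profile "$vol" stop    # done collecting
}

profile_volume data-hdd
```

Each `profile ... info` call also resets the interval counters, which is why the output quoted earlier in this thread shows both "Cumulative Stats" and "Interval Stats" sections.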
>>>>>>>> >>>>>>>> --Jim >>>>>>>> >>>>>>>> On Mon, May 28, 2018 at 10:50 PM, Sahina Bose < >>>>>>>> sabose@redhat.com> wrote: >>>>>>>> >>>>>>>>> [Adding gluster-users to look at the heal issue] >>>>>>>>> >>>>>>>>> On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir < >>>>>>>>> jim@palousetech.com> wrote: >>>>>>>>> >>>>>>>>>> Hello: >>>>>>>>>> >>>>>>>>>> I've been having some cluster and gluster performance >>>>>>>>>> issues lately. I also found that my cluster was out of date, and was >>>>>>>>>> trying to apply updates (hoping to fix some of these), and discovered the >>>>>>>>>> ovirt 4.1 repos were taken completely offline. So, I was forced to begin >>>>>>>>>> an upgrade to 4.2. According to docs I found/read, I needed only add the >>>>>>>>>> new repo, do a yum update, reboot, and be good on my hosts (did the yum >>>>>>>>>> update, the engine-setup on my hosted engine). Things seemed to work >>>>>>>>>> relatively well, except for a gluster sync issue that showed up. >>>>>>>>>> >>>>>>>>>> My cluster is a 3 node hyperconverged cluster. I upgraded >>>>>>>>>> the hosted engine first, then engine 3. When engine 3 came back up, for >>>>>>>>>> some reason one of my gluster volumes would not sync. 
Here's sample output: >>>>>>>>>> >>>>>>>>>> [root@ovirt3 ~]# gluster volume heal data-hdd info >>>>>>>>>> Brick 172.172.1.11:/gluster/brick3/data-hdd >>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>>>>>>>> Status: Connected >>>>>>>>>> Number of entries: 8 >>>>>>>>>> >>>>>>>>>> Brick 172.172.1.12:/gluster/brick3/data-hdd >>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>>>>>>>> Status: Connected >>>>>>>>>> Number of entries: 8 >>>>>>>>>> >>>>>>>>>> Brick 172.172.1.13:/gluster/brick3/data-hdd >>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32 >>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa >>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2 >>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b >>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971 >>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9 >>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba >>>>>>>>>> 
/cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>> c3ae46c/images/7f647567-d18c-4 >>>>>>>>>> 4f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625 >>>>>>>>>> Status: Connected >>>>>>>>>> Number of entries: 8 >>>>>>>>>> >>>>>>>>>> --------- >>>>>>>>>> Its been in this state for a couple days now, and bandwidth >>>>>>>>>> monitoring shows no appreciable data moving. I've tried repeatedly >>>>>>>>>> commanding a full heal from all three clusters in the node. Its always the >>>>>>>>>> same files that need healing. >>>>>>>>>> >>>>>>>>>> When running gluster volume heal data-hdd statistics, I see >>>>>>>>>> sometimes different information, but always some number of "heal failed" >>>>>>>>>> entries. It shows 0 for split brain. >>>>>>>>>> >>>>>>>>>> I'm not quite sure what to do. I suspect it may be due to >>>>>>>>>> nodes 1 and 2 still being on the older ovirt/gluster release, but I'm >>>>>>>>>> afraid to upgrade and reboot them until I have a good gluster sync (don't >>>>>>>>>> need to create a split brain issue). How do I proceed with this? >>>>>>>>>> >>>>>>>>>> Second issue: I've been experiencing VERY POOR performance >>>>>>>>>> on most of my VMs. To the tune that logging into a windows 10 vm via >>>>>>>>>> remote desktop can take 5 minutes, launching quickbooks inside said vm can >>>>>>>>>> easily take 10 minutes. On some linux VMs, I get random messages like this: >>>>>>>>>> Message from syslogd@unifi at May 28 20:39:23 ... >>>>>>>>>> kernel:[6171996.308904] NMI watchdog: BUG: soft lockup - >>>>>>>>>> CPU#0 stuck for 22s! [mongod:14766] >>>>>>>>>> >>>>>>>>>> (the process and PID are often different) >>>>>>>>>> >>>>>>>>>> I'm not quite sure what to do about this either. My >>>>>>>>>> initial thought was upgrad everything to current and see if its still >>>>>>>>>> there, but I cannot move forward with that until my gluster is healed... >>>>>>>>>> >>>>>>>>>> Thanks! 
--Jim

_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3LEV6ZQ3JV2XLAL7NYBTXOYMYUOTIRQF/
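Since the heal-entry counts above never seem to move, one quick way to confirm whether self-heal is making any progress is to sample the per-brick counts over time and compare. Below is a rough sketch that parses the stock `gluster volume heal <vol> info` output; it reads from stdin so the parsing can be checked offline, and the `parse_heal_info` helper name is mine, not part of gluster:

```shell
# Print "<brick> <pending-entry-count>" for each brick in heal-info output.
# Run it periodically (e.g. under watch) to see whether the counts shrink.
parse_heal_info() {
  awk '
    /^Brick /             { brick = $2 }        # remember current brick
    /^Number of entries:/ { print brick, $NF }  # emit brick and its count
  '
}

# Typical use on a live cluster:
#   gluster volume heal data-hdd info | parse_heal_info
```

If the counts stay identical across runs (as they appear to here), the self-heal daemon is genuinely stuck rather than just slow.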

I tried to capture a consistent log slice across the cluster from agent.log. Several systems claim they are starting the engine, but virsh never shows any sign of it. They also report that a UUID, which I'm pretty sure is not affiliated with my engine, is not found.

---------
ovirt1:
---------
MainThread::INFO::2018-05-30 00:18:11,628::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineForceStop-EngineDown) sent? sent

[root@ovirt1 vdsm]# tail -n 20 ../ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2018-05-30 00:14:02,154::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400)
MainThread::INFO::2018-05-30 00:14:02,173::states::776::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Another host already took over..
MainThread::INFO::2018-05-30 00:14:02,174::state_decorators::92::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Timeout cleared while transitioning <class 'ovirt_hosted_engine_ha.agent.states.EngineStarting'> -> <class 'ovirt_hosted_engine_ha.agent.states.EngineForceStop'>
MainThread::INFO::2018-05-30 00:14:02,533::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStarting-EngineForceStop) sent? sent
MainThread::INFO::2018-05-30 00:14:02,843::ovf_store::151::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8-49e73e6ba929
MainThread::INFO::2018-05-30 00:18:09,236::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineForceStop (score: 3400)
MainThread::INFO::2018-05-30 00:18:09,236::hosted_engine::499::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Best remote host ovirt3.nwfiber.com (id: 3, score: 3400)
MainThread::INFO::2018-05-30 00:18:09,253::hosted_engine::1007::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) Shutting down vm using `/usr/sbin/hosted-engine --vm-poweroff`
MainThread::INFO::2018-05-30 00:18:11,159::hosted_engine::1012::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) stdout:
MainThread::INFO::2018-05-30 00:18:11,160::hosted_engine::1013::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) stderr:
MainThread::ERROR::2018-05-30 00:18:11,160::hosted_engine::1021::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) Engine VM stopped on localhost
MainThread::INFO::2018-05-30 00:18:11,628::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineForceStop-EngineDown) sent? sent
MainThread::INFO::2018-05-30 00:22:17,692::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineDown (score: 3400)
MainThread::INFO::2018-05-30 00:22:17,713::state_machine::169::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Global metadata: {'maintenance': False}
MainThread::INFO::2018-05-30 00:22:17,713::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host ovirt2.nwfiber.com (id 2): {'conf_on_shared_storage': True, 'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=4348 (Wed May 30 00:16:46 2018)\nhost-id=2\nscore=3400\nvm_conf_refresh_time=4349 (Wed May 30 00:16:47 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineDown\nstopped=False\n', 'hostname': 'ovirt2.nwfiber.com', 'alive': True, 'host-id': 2, 'engine-status': {'reason': 'bad vm status', 'health': 'bad', 'vm': 'down_unexpected', 'detail': 'Down'}, 'score': 3400, 'stopped': False, 'maintenance': False, 'crc32': '5ee0a557', 'local_conf_timestamp': 4349, 'host-ts': 4348}
MainThread::INFO::2018-05-30 00:22:17,713::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host ovirt3.nwfiber.com (id 3): {'conf_on_shared_storage': True, 'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=100036 (Wed May 30 00:16:58 2018)\nhost-id=3\nscore=3400\nvm_conf_refresh_time=100037 (Wed May 30 00:16:58 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineStart\nstopped=False\n', 'hostname': 'ovirt3.nwfiber.com', 'alive': True, 'host-id': 3, 'engine-status': {'reason': 'bad vm status', 'health': 'bad', 'vm': 'down_unexpected', 'detail': 'Down'}, 'score': 3400, 'stopped': False, 'maintenance': False, 'crc32': 'e2fadf31', 'local_conf_timestamp': 100037, 'host-ts': 100036}
MainThread::INFO::2018-05-30 00:22:17,713::state_machine::177::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Local (id 1): {'engine-health': {'reason': 'vm not running on this host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'bridge': True, 'mem-free': 30186.0, 'maintenance': False, 'cpu-load': 0.0308, 'gateway': 1.0, 'storage-domain': True}
MainThread::INFO::2018-05-30 00:22:17,714::states::508::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine down and local host has best score (3400), attempting to start engine VM
MainThread::INFO::2018-05-30 00:22:18,309::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineDown-EngineStart) sent? sent
MainThread::INFO::2018-05-30 00:22:18,624::ovf_store::151::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8-49e73e6ba929

----------
ovirt2:
----------
MainThread::INFO::2018-05-30 00:08:34,534::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineUnexpectedlyDown (score: 0)
MainThread::INFO::2018-05-30 00:08:34,534::hosted_engine::499::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Best remote host ovirt3.nwfiber.com (id: 3, score: 3400)
MainThread::INFO::2018-05-30 00:08:34,545::states::678::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine down, local host does not have best score
MainThread::INFO::2018-05-30 00:12:40,627::states::693::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score) Score is 0 due to unexpected vm shutdown at Wed May 30 00:04:28 2018
MainThread::INFO::2018-05-30 00:12:40,628::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineUnexpectedlyDown (score: 0)
MainThread::INFO::2018-05-30 00:12:40,628::hosted_engine::499::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Best remote host ovirt1.nwfiber.com (id: 1, score: 3400)
MainThread::INFO::2018-05-30 00:12:40,643::states::678::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine down, local host does not have best score
MainThread::INFO::2018-05-30 00:12:40,827::ovf_store::151::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8-49e73e6ba929
MainThread::INFO::2018-05-30 00:16:46,635::states::693::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score) Score is 0 due to unexpected vm shutdown at Wed May 30 00:04:28 2018
MainThread::INFO::2018-05-30 00:16:46,636::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineUnexpectedlyDown (score: 0)
MainThread::INFO::2018-05-30 00:16:46,648::state_machine::169::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Global metadata: {'maintenance': False}
MainThread::INFO::2018-05-30 00:16:46,648::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host ovirt1.nwfiber.com (id 1): {'conf_on_shared_storage': True, 'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=7614 (Wed May 30 00:09:54 2018)\nhost-id=1\nscore=3400\nvm_conf_refresh_time=7614 (Wed May 30 00:09:54 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineStarting\nstopped=False\n', 'hostname': 'ovirt1.nwfiber.com', 'alive': True, 'host-id': 1, 'engine-status': {'reason': 'vm not running on this host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'score': 3400, 'stopped': False, 'maintenance': False, 'crc32': '1fa13e84', 'local_conf_timestamp': 7614, 'host-ts': 7614}
MainThread::INFO::2018-05-30 00:16:46,648::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host ovirt3.nwfiber.com (id 3): {'conf_on_shared_storage': True, 'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=99541 (Wed May 30 00:08:43 2018)\nhost-id=3\nscore=3400\nvm_conf_refresh_time=99542 (Wed May 30 00:08:43 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineForceStop\nstopped=False\n', 'hostname': 'ovirt3.nwfiber.com', 'alive': True, 'host-id': 3, 'engine-status': {'reason': 'bad vm status', 'health': 'bad', 'vm': 'down', 'detail': 'Down'}, 'score': 3400, 'stopped': False, 'maintenance': False, 'crc32': '56f87e2c', 'local_conf_timestamp': 99542, 'host-ts': 99541}
MainThread::INFO::2018-05-30 00:16:46,648::state_machine::177::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Local (id 2): {'engine-health': {'reason': 'bad vm status', 'health': 'bad', 'vm': 'down_unexpected', 'detail': 'Down'}, 'bridge': True, 'mem-free': 30891.0, 'maintenance': False, 'cpu-load': 0.0186, 'gateway': 1.0, 'storage-domain': True}
MainThread::INFO::2018-05-30 00:16:46,919::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineUnexpectedlyDown-EngineDown) sent? sent
MainThread::INFO::2018-05-30 00:20:52,918::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineDown (score: 3400)
MainThread::INFO::2018-05-30 00:20:52,918::hosted_engine::499::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Best remote host ovirt1.nwfiber.com (id: 1, score: 3400)
MainThread::INFO::2018-05-30 00:20:52,930::states::508::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine down and local host has best score (3400), attempting to start engine VM
MainThread::INFO::2018-05-30 00:20:53,203::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineDown-EngineStart) sent? sent
MainThread::INFO::2018-05-30 00:20:53,393::ovf_store::151::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8-49e73e6ba929

--------
ovirt3:
-------
MainThread::INFO::2018-05-30 00:21:07,343::hosted_engine::929::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) Starting vm using `/usr/sbin/hosted-engine --vm-start`
MainThread::INFO::2018-05-30 00:21:08,107::hosted_engine::935::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) stdout:
MainThread::INFO::2018-05-30 00:21:08,108::hosted_engine::936::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) stderr: Command VM.getStats with args {'vmID': '50392390-4f89-435c-bf3b-8254b58a4ef7'} failed: (code=1, message=Virtual machine does not exist: {'vmId': u'50392390-4f89-435c-bf3b-8254b58a4ef7'})
MainThread::INFO::2018-05-30 00:21:08,108::hosted_engine::948::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) Engine VM started on localhost
MainThread::INFO::2018-05-30 00:21:08,413::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStart-EngineStarting) sent? sent

[root@ovirt3 images]# tail -n 20 /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2018-05-30 00:12:51,792::hosted_engine::1012::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) stdout:
MainThread::INFO::2018-05-30 00:12:51,792::hosted_engine::1013::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) stderr:
MainThread::ERROR::2018-05-30 00:12:51,793::hosted_engine::1021::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) Engine VM stopped on localhost
MainThread::INFO::2018-05-30 00:12:52,157::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineForceStop-EngineDown) sent? sent
MainThread::INFO::2018-05-30 00:16:58,237::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineDown (score: 3400)
MainThread::INFO::2018-05-30 00:16:58,256::states::508::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine down and local host has best score (3400), attempting to start engine VM
MainThread::INFO::2018-05-30 00:16:58,595::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineDown-EngineStart) sent? sent
MainThread::INFO::2018-05-30 00:16:58,816::ovf_store::151::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8-49e73e6ba929
MainThread::INFO::2018-05-30 00:21:05,082::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStart (score: 3400)
MainThread::INFO::2018-05-30 00:21:05,083::hosted_engine::499::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Best remote host ovirt1.nwfiber.com (id: 1, score: 3400)
MainThread::INFO::2018-05-30 00:21:05,125::hosted_engine::968::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state) Ensuring VDSM state is clear for engine VM
MainThread::INFO::2018-05-30 00:21:05,141::hosted_engine::988::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state) Cleaning state for non-running VM
MainThread::INFO::2018-05-30 00:21:07,343::hosted_engine::980::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state) Vdsm state for VM clean
MainThread::INFO::2018-05-30 00:21:07,343::hosted_engine::929::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) Starting vm using `/usr/sbin/hosted-engine --vm-start`
MainThread::INFO::2018-05-30 00:21:08,107::hosted_engine::935::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) stdout:
MainThread::INFO::2018-05-30 00:21:08,108::hosted_engine::936::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) stderr: Command VM.getStats with args {'vmID': '50392390-4f89-435c-bf3b-8254b58a4ef7'} failed: (code=1, message=Virtual machine does not exist: {'vmId': u'50392390-4f89-435c-bf3b-8254b58a4ef7'})
MainThread::INFO::2018-05-30 00:21:08,108::hosted_engine::948::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) Engine VM started on localhost
MainThread::INFO::2018-05-30 00:21:08,413::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStart-EngineStarting) sent? sent
----------

Note that virsh does not show the engine started on this machine....

On Wed, May 30, 2018 at 12:16 AM, Jim Kusznir <jim@palousetech.com> wrote:
So, on ovirt2, I ran hosted-engine --vm-status; it took over two minutes to return, but two of the three machines had current (non-stale) data:
[root@ovirt2 ~]# hosted-engine --vm-status
--== Host 1 status ==--
conf_on_shared_storage             : True
Status up-to-date                  : False
Hostname                           : ovirt1.nwfiber.com
Host ID                            : 1
Engine status                      : unknown stale-data
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : f562054b
local_conf_timestamp               : 7368
Host timestamp                     : 7367
Extra metadata (valid at timestamp):
	metadata_parse_version=1
	metadata_feature_version=1
	timestamp=7367 (Wed May 30 00:05:47 2018)
	host-id=1
	score=3400
	vm_conf_refresh_time=7368 (Wed May 30 00:05:48 2018)
	conf_on_shared_storage=True
	maintenance=False
	state=EngineStart
	stopped=False
--== Host 2 status ==--
conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : ovirt2.nwfiber.com
Host ID                            : 2
Engine status                      : {"reason": "bad vm status", "health": "bad", "vm": "down_unexpected", "detail": "Down"}
Score                              : 0
stopped                            : False
Local maintenance                  : False
crc32                              : 6822f86a
local_conf_timestamp               : 3610
Host timestamp                     : 3610
Extra metadata (valid at timestamp):
	metadata_parse_version=1
	metadata_feature_version=1
	timestamp=3610 (Wed May 30 00:04:28 2018)
	host-id=2
	score=0
	vm_conf_refresh_time=3610 (Wed May 30 00:04:28 2018)
	conf_on_shared_storage=True
	maintenance=False
	state=EngineUnexpectedlyDown
	stopped=False
	timeout=Wed Dec 31 17:09:30 1969
--== Host 3 status ==--
conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : ovirt3.nwfiber.com
Host ID                            : 3
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : 006512a3
local_conf_timestamp               : 99294
Host timestamp                     : 99294
Extra metadata (valid at timestamp):
	metadata_parse_version=1
	metadata_feature_version=1
	timestamp=99294 (Wed May 30 00:04:36 2018)
	host-id=3
	score=3400
	vm_conf_refresh_time=99294 (Wed May 30 00:04:36 2018)
	conf_on_shared_storage=True
	maintenance=False
	state=EngineStarting
	stopped=False
---------------
Seeing that the HA score was 0 on ovirt2 (where I had been focusing my efforts) while ovirt3 had a good HA score, I tried to launch the engine there:
[root@ovirt3 ff3d0185-d120-40ee-9792-ebbd5caaca8b]# hosted-engine --vm-start Command VM.getStats with args {'vmID': '50392390-4f89-435c-bf3b-8254b58a4ef7'} failed: (code=1, message=Virtual machine does not exist: {'vmId': u'50392390-4f89-435c-bf3b-8254b58a4ef7'})
ovirt1 was returning the "ha-agent or storage not online" message, but now it's hanging (probably two minutes), so it may actually return status this time. I figure if it can't return status, there's no point in trying to start the VM there.
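When the agents on several hosts all claim to be starting the engine, it can help to pull just the state transitions out of each host's agent.log rather than reading every monitoring-loop line, so the failover ping-pong is visible at a glance. A rough sketch against the log format quoted in this thread (the `state_transitions` helper name is mine):

```shell
# Print "<timestamp> <from-state>-<to-state>" for each state_transition
# notification in an ovirt-hosted-engine-ha agent.log, read from stdin.
state_transitions() {
  sed -n 's/^MainThread::INFO::\([0-9-]* [0-9:,]*\)::.*state_transition (\([^)]*\)).*/\1 \2/p'
}

# On a host:
#   state_transitions < /var/log/ovirt-hosted-engine-ha/agent.log
```

Running this on each host and interleaving the output by timestamp makes it easy to see which host force-stopped the engine while another was trying to start it.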
On Wed, May 30, 2018 at 12:08 AM, Jim Kusznir <jim@palousetech.com> wrote:
The agent.log was sent minutes ago in another message. Here's the gluster client log for the engine volume:
[2018-05-30 06:31:12.343709] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata selfheal on 3a430e50-48cb-463c-8657-aab5458ac155
[2018-05-30 06:31:12.348210] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-engine-replicate-0: expunging file 117c5234-e6d6-4f03-809b-a4ef6a6fac4d/hosted-engine.metadata (38a277cd-113d-4ca8-8f48-fadc944d3272) on engine-client-1
[2018-05-30 06:31:12.360602] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed metadata selfheal on 3a430e50-48cb-463c-8657-aab5458ac155. sources=[0] 2 sinks=1
[2018-05-30 06:31:12.379169] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed entry selfheal on 117c5234-e6d6-4f03-809b-a4ef6a6fac4d. sources=[0] 2 sinks=1
[2018-05-30 06:32:54.756729] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f
[2018-05-30 06:32:54.763175] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed metadata selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 06:37:06.581538] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 06c557a9-b550-4e25-b190-804e50cea65b. sources=[0] 2 sinks=1
[2018-05-30 06:39:55.739302] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 06:44:04.911825] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 06:58:05.123854] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata selfheal on 03ca7a59-e8ee-4a70-bc68-47e972aeb801
[2018-05-30 06:58:05.131929] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed metadata selfheal on 03ca7a59-e8ee-4a70-bc68-47e972aeb801. sources=[0] 2 sinks=1
[2018-05-30 07:01:42.583991] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 07:05:48.777982] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
--------
broker log:
--------
Thread-1::INFO::2018-05-30 00:07:13,241::ping::60::ping.Ping::(action) Successfully pinged 192.168.8.1
Listener::INFO::2018-05-30 00:07:14,344::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:19,356::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-5::INFO::2018-05-30 00:07:22,081::engine_health::202::engine_health.EngineHealth::(_result_from_stats) VM not running on this host, status Down
Listener::INFO::2018-05-30 00:07:24,369::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:29,382::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:34,393::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:39,406::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:44,419::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:49,431::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:54,444::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:59,452::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-2::INFO::2018-05-30 00:08:03,736::mgmt_bridge::62::mgmt_bridge.MgmtBridge::(action) Found bridge ovirtmgmt with ports
Listener::INFO::2018-05-30 00:08:04,460::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:08:09,468::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-4::INFO::2018-05-30 00:08:11,461::cpu_load_no_engine::116::cpu_load_no_engine.CpuLoadNoEngine::(calculate_load) VM stats do not contain cpu usage. VM might be down.
Thread-4::INFO::2018-05-30 00:08:11,461::cpu_load_no_engine::126::cpu_load_no_engine.CpuLoadNoEngine::(calculate_load) System load total=0.0169, engine=0.0000, non-engine=0.0169
Listener::INFO::2018-05-30 00:08:14,481::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:08:19,490::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-5::INFO::2018-05-30 00:08:22,175::engine_health::202::engine_health.EngineHealth::(_result_from_stats) VM not running on this host, status Down
Thread-1::INFO::2018-05-30 00:08:22,436::ping::60::ping.Ping::(action) Successfully pinged 192.168.8.1
Listener::INFO::2018-05-30 00:08:24,503::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
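The broker log above is dominated by the VDSM domain monitor sitting in PENDING, which is itself a symptom worth quantifying. A quick tally of the statuses seen in a broker.log, reading from stdin so the logic can be checked offline (the `monitor_status_counts` helper name is mine):

```shell
# Count how often each VDSM domain monitor status appears in broker.log.
# Long runs of PENDING (rather than the monitor settling) point at
# storage-side trouble on that host.
monitor_status_counts() {
  grep -o 'VDSM domain monitor status: [A-Z]*' | sort | uniq -c
}

# On a host:
#   monitor_status_counts < /var/log/ovirt-hosted-engine-ha/broker.log
```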
On Wed, May 30, 2018 at 12:03 AM, Sahina Bose <sabose@redhat.com> wrote:
[Adding gluster-users back]
Nothing amiss with volume info and status. Can you check the agent.log and broker.log - will be under /var/log/ovirt-hosted-engine-ha/
Also the gluster client logs - under /var/log/glusterfs/rhev-data-center-mnt-glusterSD<volume>.log
On Wed, May 30, 2018 at 12:08 PM, Jim Kusznir <jim@palousetech.com> wrote:
I believe the gluster data store for the engine is up and working correctly. The rest are not mounted, as the engine hasn't managed to start correctly yet. I did perform the copy-a-junk-file test: copy a file into the data store, check that it's there on another host, then delete it and see that it disappeared on the first host; it passed that test. Here's the info and status. (I have NOT performed the steps that Krutika and Ravishankar suggested yet, as I don't have my data volumes working again yet.)
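The smoke test described above can be scripted so it's repeatable per volume. A rough sketch only; the mount path and peer hostname are placeholders, and the check command is parameterized ("ssh <peer>" in real use) so the logic itself can be exercised locally:

```shell
# Replication smoke test: write a marker file through one host's gluster
# mount, verify it via a second host, remove it, verify the removal.
# $1 = mount point, $2 = command prefix used to run the peer-side check
#      (e.g. "ssh ovirt2.nwfiber.com"; "sh -c" runs it locally).
repl_smoke_test() {
  mnt=$1; remote=$2
  marker="$mnt/.repl-test-$$"
  echo test > "$marker" || return 1
  $remote "test -f '$marker'"   || { echo "replication check failed"; return 1; }
  rm -f "$marker"
  $remote "test ! -f '$marker'" || { echo "removal check failed"; return 1; }
  echo "replication smoke test passed"
}

# Real use (hypothetical mount path for the data volume):
#   repl_smoke_test /rhev/data-center/mnt/glusterSD/ovirt1.nwfiber.com:_data "ssh ovirt2.nwfiber.com"
```

Note this only proves the client mounts agree; it does not prove all three bricks got the write, which is what the heal counters track.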
[root@ovirt2 images]# gluster volume info
Volume Name: data Type: Replicate Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59 Status: Started Snapshot Count: 0 Number of Bricks: 1 x (2 + 1) = 3 Transport-type: tcp Bricks: Brick1: ovirt1.nwfiber.com:/gluster/brick2/data Brick2: ovirt2.nwfiber.com:/gluster/brick2/data Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter) Options Reconfigured: diagnostics.count-fop-hits: on diagnostics.latency-measurement: on changelog.changelog: on geo-replication.ignore-pid-check: on geo-replication.indexing: on server.allow-insecure: on performance.readdir-ahead: on performance.quick-read: off performance.read-ahead: off performance.io-cache: off performance.stat-prefetch: off cluster.eager-lock: enable network.remote-dio: enable cluster.quorum-type: auto cluster.server-quorum-type: server storage.owner-uid: 36 storage.owner-gid: 36 features.shard: on features.shard-block-size: 512MB performance.low-prio-threads: 32 cluster.data-self-heal-algorithm: full cluster.locking-scheme: granular cluster.shd-wait-qlength: 10000 cluster.shd-max-threads: 8 network.ping-timeout: 30 user.cifs: off nfs.disable: on performance.strict-o-direct: on
Volume Name: data-hdd Type: Replicate Volume ID: d342a3ab-16f3-49f0-bbcf-f788be8ac5f1 Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: 172.172.1.11:/gluster/brick3/data-hdd Brick2: 172.172.1.12:/gluster/brick3/data-hdd Brick3: 172.172.1.13:/gluster/brick3/data-hdd Options Reconfigured: diagnostics.count-fop-hits: on diagnostics.latency-measurement: on network.ping-timeout: 30 server.allow-insecure: on storage.owner-gid: 36 storage.owner-uid: 36 user.cifs: off features.shard: on cluster.shd-wait-qlength: 10000 cluster.shd-max-threads: 8 cluster.locking-scheme: granular cluster.data-self-heal-algorithm: full cluster.server-quorum-type: server cluster.quorum-type: auto cluster.eager-lock: enable network.remote-dio: enable performance.low-prio-threads: 32 performance.stat-prefetch: off performance.io-cache: off performance.read-ahead: off performance.quick-read: off changelog.changelog: on geo-replication.ignore-pid-check: on geo-replication.indexing: on transport.address-family: inet performance.readdir-ahead: on
Volume Name: engine Type: Replicate Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de Status: Started Snapshot Count: 0 Number of Bricks: 1 x (2 + 1) = 3 Transport-type: tcp Bricks: Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter) Options Reconfigured: changelog.changelog: on geo-replication.ignore-pid-check: on geo-replication.indexing: on performance.readdir-ahead: on performance.quick-read: off performance.read-ahead: off performance.io-cache: off performance.stat-prefetch: off cluster.eager-lock: enable network.remote-dio: off cluster.quorum-type: auto cluster.server-quorum-type: server storage.owner-uid: 36 storage.owner-gid: 36 features.shard: on features.shard-block-size: 512MB performance.low-prio-threads: 32 cluster.data-self-heal-algorithm: full cluster.locking-scheme: granular cluster.shd-wait-qlength: 10000 cluster.shd-max-threads: 6 network.ping-timeout: 30 user.cifs: off nfs.disable: on performance.strict-o-direct: on
Volume Name: iso
Type: Replicate
Volume ID: b1ba15f5-0f0f-4411-89d0-595179f02b92
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick4/iso
Brick2: ovirt2.nwfiber.com:/gluster/brick4/iso
Brick3: ovirt3.nwfiber.com:/gluster/brick4/iso (arbiter)
Options Reconfigured:
performance.strict-o-direct: on
nfs.disable: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 6
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
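One detail worth noticing in the volume definitions above: engine and iso both set nfs.disable: on, while data-hdd does not, which matches the "NFS Server ... N" rows appearing only in data-hdd's status output. A small sketch of checking this from saved `gluster volume info` output (the option list below is abridged from the data-hdd listing above; this is an observation about the delta, not a tuning recommendation):

```shell
# Abridged options copied from the data-hdd volume info above.
hdd_opts='network.remote-dio: enable
server.allow-insecure: on
features.shard: on'

# Is nfs.disable explicitly set on this volume?
if printf '%s\n' "$hdd_opts" | grep -q '^nfs\.disable'; then
  nfs_state="explicitly set"
else
  nfs_state="not set"
fi
echo "data-hdd nfs.disable: $nfs_state"
```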
[root@ovirt2 images]# gluster volume status
Status of volume: data
Gluster process                                 TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick2/data   49152     0          Y       3226
Brick ovirt2.nwfiber.com:/gluster/brick2/data   49152     0          Y       2967
Brick ovirt3.nwfiber.com:/gluster/brick2/data   49152     0          Y       2554
Self-heal Daemon on localhost                   N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y       4771

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: data-hdd
Gluster process                                 TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 172.172.1.11:/gluster/brick3/data-hdd     49153     0          Y       3232
Brick 172.172.1.12:/gluster/brick3/data-hdd     49153     0          Y       2976
Brick 172.172.1.13:/gluster/brick3/data-hdd     N/A       N/A        N       N/A
NFS Server on localhost                         N/A       N/A        N       N/A
Self-heal Daemon on localhost                   N/A       N/A        Y       4818
NFS Server on ovirt3.nwfiber.com                N/A       N/A        N       N/A
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y       17853
NFS Server on ovirt1.nwfiber.com                N/A       N/A        N       N/A
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y       4771

Task Status of Volume data-hdd
------------------------------------------------------------------------------
There are no active volume tasks
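The data-hdd status shows the brick on 172.172.1.13 offline (Online = N), which lines up with the heal entries that won't clear. A quick way to pull offline bricks out of saved `gluster volume status` output (sample lines copied from the status above; the column positions are an assumption about this gluster release's layout):

```shell
# Saved Brick rows from `gluster volume status data-hdd` above.
status='Brick 172.172.1.11:/gluster/brick3/data-hdd 49153 0 Y 3232
Brick 172.172.1.12:/gluster/brick3/data-hdd 49153 0 Y 2976
Brick 172.172.1.13:/gluster/brick3/data-hdd N/A N/A N N/A'

# The Online flag is the second-to-last field on Brick rows.
offline=$(printf '%s\n' "$status" | awk '$1 == "Brick" && $(NF-1) == "N" { print $2 }')
echo "offline bricks: $offline"
```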
Status of volume: engine
Gluster process                                 TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick1/engine 49154     0          Y       3239
Brick ovirt2.nwfiber.com:/gluster/brick1/engine 49154     0          Y       2982
Brick ovirt3.nwfiber.com:/gluster/brick1/engine 49154     0          Y       2578
Self-heal Daemon on localhost                   N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y       4771

Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: iso
Gluster process                                 TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick4/iso    49155     0          Y       3247
Brick ovirt2.nwfiber.com:/gluster/brick4/iso    49155     0          Y       2990
Brick ovirt3.nwfiber.com:/gluster/brick4/iso    49155     0          Y       2580
Self-heal Daemon on localhost                   N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y       4771

Task Status of Volume iso
------------------------------------------------------------------------------
There are no active volume tasks
On Tue, May 29, 2018 at 11:30 PM, Sahina Bose <sabose@redhat.com> wrote:
On Wed, May 30, 2018 at 10:42 AM, Jim Kusznir <jim@palousetech.com> wrote:
hosted-engine --deploy failed (would not come up on my existing gluster storage). However, I realized no changes were written to my existing storage. So, I went back to trying to get my old engine running.
hosted-engine --vm-status is now taking a very long time (5+ minutes) to return, and it returns stale information everywhere. I thought perhaps the lockspace was corrupt, so I tried to clean that and the metadata, but both are failing (--clean-metadata has hung and I can't even ctrl-c out of it).
How can I reinitialize all the lockspace/metadata safely? There are no engine or VMs running currently....
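For reference, the usual sequence for reinitializing the hosted-engine lockspace is roughly the following (hedged: verify the exact steps against the documentation for your 4.2 build before running anything; the sketch below only prints the steps rather than executing them, and the HA services must be stopped on every host first or the reinit can hang exactly as described):

```shell
# Print (not run) a sketch of the hosted-engine lockspace reinit sequence.
steps='hosted-engine --set-maintenance --mode=global
# on every host:
systemctl stop ovirt-ha-agent ovirt-ha-broker
hosted-engine --reinitialize-lockspace
systemctl start ovirt-ha-broker ovirt-ha-agent
hosted-engine --set-maintenance --mode=none'
printf '%s\n' "$steps"
```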
I think the first thing to check is that your storage is up and running. Can you mount the gluster volumes and access their contents? Please provide the gluster volume info and gluster volume status output for the volumes you're using.
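A quick sanity check along those lines (the volume names come from this thread; the match against /proc/mounts is a loose assumption — on oVirt hosts the gluster storage domains are FUSE-mounted, typically under /rhev/data-center/mnt/glusterSD/):

```shell
# Report whether each expected gluster volume appears in the mount table.
out=$(for vol in engine data data-hdd iso; do
  if grep -q "glusterfs" /proc/mounts && grep -q "/$vol " /proc/mounts; then
    echo "$vol: mounted"
  else
    echo "$vol: NOT mounted"
  fi
done)
printf '%s\n' "$out"
```

Any volume reported as not mounted (or a mounted one whose contents hang on ls) points at the storage layer rather than the engine itself.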
--Jim
On Tue, May 29, 2018 at 9:33 PM, Jim Kusznir <jim@palousetech.com> wrote:
> Well, things went from bad to very, very bad.... > > It appears that during one of the 2 minute lockups, the fencing > agents decided that another node in the cluster was down. As a result, 2 > of the 3 nodes were simultaneously reset with fencing agent reboot. After > the nodes came back up, the engine would not start. All running VMs > (including VMs on the 3rd node that was not rebooted) crashed. > > I've now been working for about 3 hours trying to get the engine to > come up. I don't know why it won't start. hosted-engine --vm-start says > its starting, but it doesn't start (virsh doesn't show any VMs running). > I'm currently running --deploy, as I had run out of options for anything > else I can come up with. I hope this will allow me to re-import all my > existing VMs and allow me to start them back up after everything comes back > up. > > I do have an unverified geo-rep backup; I don't know if it is a good > backup (there were several prior messages to this list, but I didn't get > replies to my questions. It was running in what I believe to be "strange", > and the data directories are larger than their source). > > I'll see if my --deploy works, and if not, I'll be back with another > message/help request. > > When the dust settles and I'm at least minimally functional again, I > really want to understand why all these technologies designed to offer > redundancy conspired to reduce uptime and create failures where there > weren't any otherwise. I thought with hosted engine, 3 ovirt servers and > glusterfs with minimum replica 2+arb or replica 3 should have offered > strong resilience against server failure or disk failure, and should have > prevented / recovered from data corruption. Instead, all of the above > happened (once I get my cluster back up, I still have to try and recover my > webserver VM, which won't boot due to XFS corrupt journal issues created > during the gluster crashes). 
I think a lot of these issues were rooted > from the upgrade from 4.1 to 4.2. > > --Jim > > On Tue, May 29, 2018 at 6:25 PM, Jim Kusznir <jim@palousetech.com> > wrote: > >> I also finally found the following in my system log on one server: >> >> [10679.524491] INFO: task glusterclogro:14933 blocked for more than >> 120 seconds. >> [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. >> [10679.527144] glusterclogro D ffff97209832bf40 0 14933 >> 1 0x00000080 >> [10679.527150] Call Trace: >> [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 >> [xfs] >> [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >> [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 >> [xfs] >> [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >> [10679.527268] [<ffffffffb992076f>] ? >> system_call_after_swapgs+0xbc/0x160 >> [10679.527271] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 >> [10679.527275] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >> [10679.527279] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [10679.527283] INFO: task glusterposixfsy:14941 blocked for more >> than 120 seconds. >> [10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. >> [10679.529956] glusterposixfsy D ffff972495f84f10 0 14941 >> 1 0x00000080 >> [10679.529961] Call Trace: >> [10679.529966] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [10679.530003] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 >> [xfs] >> [10679.530008] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >> [10679.530038] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 >> [xfs] >> [10679.530042] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >> [10679.530046] [<ffffffffb992076f>] ? 
>> system_call_after_swapgs+0xbc/0x160 >> [10679.530050] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 >> [10679.530054] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >> [10679.530058] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [10679.530062] INFO: task glusteriotwr13:15486 blocked for more >> than 120 seconds. >> [10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. >> [10679.533732] glusteriotwr13 D ffff9720a83f0000 0 15486 >> 1 0x00000080 >> [10679.533738] Call Trace: >> [10679.533747] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [10679.533799] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 >> [xfs] >> [10679.533806] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >> [10679.533846] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 >> [xfs] >> [10679.533852] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >> [10679.533858] [<ffffffffb992076f>] ? >> system_call_after_swapgs+0xbc/0x160 >> [10679.533863] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 >> [10679.533868] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >> [10679.533873] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [10919.512757] INFO: task glusterclogro:14933 blocked for more than >> 120 seconds. >> [10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. >> [10919.516663] glusterclogro D ffff97209832bf40 0 14933 >> 1 0x00000080 >> [10919.516677] Call Trace: >> [10919.516690] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [10919.516696] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 >> [10919.516703] [<ffffffffb951cc04>] ? blk_finish_plug+0x14/0x40 >> [10919.516768] [<ffffffffc05e9224>] ? _xfs_buf_ioapply+0x334/0x460 >> [xfs] >> [10919.516774] [<ffffffffb991432d>] wait_for_completion+0xfd/0x140 >> [10919.516782] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >> [10919.516821] [<ffffffffc05eb0a3>] ? 
_xfs_buf_read+0x23/0x40 [xfs] >> [10919.516859] [<ffffffffc05eafa9>] xfs_buf_submit_wait+0xf9/0x1d0 >> [xfs] >> [10919.516902] [<ffffffffc061b279>] ? >> xfs_trans_read_buf_map+0x199/0x400 [xfs] >> [10919.516940] [<ffffffffc05eb0a3>] _xfs_buf_read+0x23/0x40 [xfs] >> [10919.516977] [<ffffffffc05eb1b9>] xfs_buf_read_map+0xf9/0x160 >> [xfs] >> [10919.517022] [<ffffffffc061b279>] xfs_trans_read_buf_map+0x199/0x400 >> [xfs] >> [10919.517057] [<ffffffffc05c8d04>] xfs_da_read_buf+0xd4/0x100 >> [xfs] >> [10919.517091] [<ffffffffc05c8d53>] xfs_da3_node_read+0x23/0xd0 >> [xfs] >> [10919.517126] [<ffffffffc05c9fee>] xfs_da3_node_lookup_int+0x6e/0x2f0 >> [xfs] >> [10919.517160] [<ffffffffc05d5a1d>] xfs_dir2_node_lookup+0x4d/0x170 >> [xfs] >> [10919.517194] [<ffffffffc05ccf5d>] xfs_dir_lookup+0x1bd/0x1e0 >> [xfs] >> [10919.517233] [<ffffffffc05fd8d9>] xfs_lookup+0x69/0x140 [xfs] >> [10919.517271] [<ffffffffc05fa018>] xfs_vn_lookup+0x78/0xc0 [xfs] >> [10919.517278] [<ffffffffb9425cf3>] lookup_real+0x23/0x60 >> [10919.517283] [<ffffffffb9426702>] __lookup_hash+0x42/0x60 >> [10919.517288] [<ffffffffb942d519>] SYSC_renameat2+0x3a9/0x5a0 >> [10919.517296] [<ffffffffb94d3753>] ? >> selinux_file_free_security+0x23/0x30 >> [10919.517304] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [10919.517309] [<ffffffffb992076f>] ? >> system_call_after_swapgs+0xbc/0x160 >> [10919.517313] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [10919.517318] [<ffffffffb992076f>] ? >> system_call_after_swapgs+0xbc/0x160 >> [10919.517323] [<ffffffffb942e58e>] SyS_renameat2+0xe/0x10 >> [10919.517328] [<ffffffffb942e5ce>] SyS_rename+0x1e/0x20 >> [10919.517333] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >> [10919.517339] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [11159.496095] INFO: task glusteriotwr9:15482 blocked for more than >> 120 seconds. >> [11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. 
>> [11159.498978] glusteriotwr9 D ffff971fa0fa1fa0 0 15482 >> 1 0x00000080 >> [11159.498984] Call Trace: >> [11159.498995] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50 >> [11159.498999] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [11159.499003] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 >> [11159.499056] [<ffffffffc05dd9b7>] ? >> xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs] >> [11159.499082] [<ffffffffc05dd43e>] ? >> xfs_iext_bno_to_irec+0x8e/0xd0 [xfs] >> [11159.499090] [<ffffffffb92f7a12>] ? ktime_get_ts64+0x52/0xf0 >> [11159.499093] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50 >> [11159.499097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 >> [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20 >> [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50 >> [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0 >> [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90 >> [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40 >> [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70 >> [11159.499125] [<ffffffffb9394e85>] grab_cache_page_write_begin+0x >> 55/0xc0 >> [11159.499130] [<ffffffffb9484b76>] iomap_write_begin+0x66/0x100 >> [11159.499135] [<ffffffffb9484edf>] iomap_write_actor+0xcf/0x1d0 >> [11159.499140] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 >> [11159.499144] [<ffffffffb94854e7>] iomap_apply+0xb7/0x150 >> [11159.499149] [<ffffffffb9485621>] iomap_file_buffered_write+0xa1 >> /0xe0 >> [11159.499153] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 >> [11159.499182] [<ffffffffc05f025d>] xfs_file_buffered_aio_write+0x12d/0x2c0 >> [xfs] >> [11159.499213] [<ffffffffc05f057d>] xfs_file_aio_write+0x18d/0x1b0 >> [xfs] >> [11159.499217] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 >> [11159.499222] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 >> [11159.499225] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 >> [11159.499230] [<ffffffffb992076f>] ? 
>> system_call_after_swapgs+0xbc/0x160 >> [11159.499234] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >> [11159.499238] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than >> 120 seconds. >> [11279.490197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. >> [11279.491665] xfsaild/dm-10 D ffff9720a8660fd0 0 1134 >> 2 0x00000000 >> [11279.491671] Call Trace: >> [11279.491682] [<ffffffffb92a3a2e>] ? >> try_to_del_timer_sync+0x5e/0x90 >> [11279.491688] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [11279.491744] [<ffffffffc060de36>] _xfs_log_force+0x1c6/0x2c0 >> [xfs] >> [11279.491750] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >> [11279.491783] [<ffffffffc0619fec>] ? xfsaild+0x16c/0x6f0 [xfs] >> [11279.491817] [<ffffffffc060df5c>] xfs_log_force+0x2c/0x70 [xfs] >> [11279.491849] [<ffffffffc0619e80>] ? >> xfs_trans_ail_cursor_first+0x90/0x90 [xfs] >> [11279.491880] [<ffffffffc0619fec>] xfsaild+0x16c/0x6f0 [xfs] >> [11279.491913] [<ffffffffc0619e80>] ? >> xfs_trans_ail_cursor_first+0x90/0x90 [xfs] >> [11279.491919] [<ffffffffb92bb161>] kthread+0xd1/0xe0 >> [11279.491926] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 >> [11279.491932] [<ffffffffb9920677>] ret_from_fork_nospec_begin+0x2 >> 1/0x21 >> [11279.491936] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 >> [11279.491976] INFO: task glusterclogfsyn:14934 blocked for more >> than 120 seconds. >> [11279.493466] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. >> [11279.494952] glusterclogfsyn D ffff97209832af70 0 14934 >> 1 0x00000080 >> [11279.494957] Call Trace: >> [11279.494979] [<ffffffffc0309839>] ? >> __split_and_process_bio+0x2e9/0x520 [dm_mod] >> [11279.494983] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [11279.494987] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 >> [11279.494997] [<ffffffffc0309d98>] ? 
dm_make_request+0x128/0x1a0 >> [dm_mod] >> [11279.495001] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 >> [11279.495005] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x >> 140 >> [11279.495010] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >> [11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 >> [11279.495049] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 >> [xfs] >> [11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 >> [xfs] >> [11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >> [11279.495090] [<ffffffffb992076f>] ? >> system_call_after_swapgs+0xbc/0x160 >> [11279.495094] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 >> [11279.495098] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >> [11279.495102] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [11279.495105] INFO: task glusterposixfsy:14941 blocked for more >> than 120 seconds. >> [11279.496606] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. >> [11279.498114] glusterposixfsy D ffff972495f84f10 0 14941 >> 1 0x00000080 >> [11279.498118] Call Trace: >> [11279.498134] [<ffffffffc0309839>] ? >> __split_and_process_bio+0x2e9/0x520 [dm_mod] >> [11279.498138] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [11279.498142] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 >> [11279.498152] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 >> [dm_mod] >> [11279.498156] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 >> [11279.498160] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x >> 140 >> [11279.498165] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >> [11279.498169] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 >> [11279.498202] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 >> [xfs] >> [11279.498231] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 >> [xfs] >> [11279.498238] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >> [11279.498242] [<ffffffffb992076f>] ? 
>> system_call_after_swapgs+0xbc/0x160 >> [11279.498246] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 >> [11279.498250] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >> [11279.498254] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [11279.498257] INFO: task glusteriotwr1:14950 blocked for more than >> 120 seconds. >> [11279.499789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. >> [11279.501343] glusteriotwr1 D ffff97208b6daf70 0 14950 >> 1 0x00000080 >> [11279.501348] Call Trace: >> [11279.501353] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [11279.501390] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 >> [xfs] >> [11279.501396] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >> [11279.501428] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 >> [xfs] >> [11279.501432] [<ffffffffb944ef3f>] generic_write_sync+0x4f/0x70 >> [11279.501461] [<ffffffffc05f0545>] xfs_file_aio_write+0x155/0x1b0 >> [xfs] >> [11279.501466] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 >> [11279.501471] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 >> [11279.501475] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 >> [11279.501479] [<ffffffffb992076f>] ? >> system_call_after_swapgs+0xbc/0x160 >> [11279.501483] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >> [11279.501489] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [11279.501493] INFO: task glusteriotwr4:14953 blocked for more than >> 120 seconds. >> [11279.503047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >> disables this message. >> [11279.504630] glusteriotwr4 D ffff972499f2bf40 0 14953 >> 1 0x00000080 >> [11279.504635] Call Trace: >> [11279.504640] [<ffffffffb9913f79>] schedule+0x29/0x70 >> [11279.504676] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 >> [xfs] >> [11279.504681] [<ffffffffb92cf1b0>] ? 
wake_up_state+0x20/0x20 >> [11279.504710] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 >> [xfs] >> [11279.504714] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >> [11279.504718] [<ffffffffb992076f>] ? >> system_call_after_swapgs+0xbc/0x160 >> [11279.504722] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 >> [11279.504725] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >> [11279.504730] [<ffffffffb992077b>] ? >> system_call_after_swapgs+0xc8/0x160 >> [12127.466494] perf: interrupt took too long (8263 > 8150), >> lowering kernel.perf_event_max_sample_rate to 24000 >> >> -------------------- >> I think this is the cause of the massive ovirt performance issues >> irrespective of gluster volume. At the time this happened, I was also >> ssh'ed into the host, and was doing some rpm querry commands. I had just >> run rpm -qa |grep glusterfs (to verify what version was actually >> installed), and that command took almost 2 minutes to return! Normally it >> takes less than 2 seconds. That is all pure local SSD IO, too.... >> >> I'm no expert, but its my understanding that anytime a software >> causes these kinds of issues, its a serious bug in the software, even if >> its mis-handled exceptions. Is this correct? >> >> --Jim >> >> On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir <jim@palousetech.com> >> wrote: >> >>> I think this is the profile information for one of the volumes >>> that lives on the SSDs and is fully operational with no down/problem disks: >>> >>> [root@ovirt2 yum.repos.d]# gluster volume profile data info >>> Brick: ovirt2.nwfiber.com:/gluster/brick2/data >>> ---------------------------------------------- >>> Cumulative Stats: >>> Block Size: 256b+ 512b+ >>> 1024b+ >>> No. of Reads: 983 2696 >>> 1059 >>> No. of Writes: 0 1113 >>> 302 >>> >>> Block Size: 2048b+ 4096b+ >>> 8192b+ >>> No. of Reads: 852 88608 >>> 53526 >>> No. of Writes: 522 812340 >>> 76257 >>> >>> Block Size: 16384b+ 32768b+ >>> 65536b+ >>> No. of Reads: 54351 241901 >>> 15024 >>> No. 
of Writes: 21636 8656 >>> 8976 >>> >>> Block Size: 131072b+ >>> No. of Reads: 524156 >>> No. of Writes: 296071 >>> %-latency Avg-latency Min-Latency Max-Latency No. of >>> calls Fop >>> --------- ----------- ----------- ----------- >>> ------------ ---- >>> 0.00 0.00 us 0.00 us 0.00 us >>> 4189 RELEASE >>> 0.00 0.00 us 0.00 us 0.00 us >>> 1257 RELEASEDIR >>> 0.00 46.19 us 12.00 us 187.00 us >>> 69 FLUSH >>> 0.00 147.00 us 78.00 us 367.00 us >>> 86 REMOVEXATTR >>> 0.00 223.46 us 24.00 us 1166.00 us >>> 149 READDIR >>> 0.00 565.34 us 76.00 us 3639.00 us >>> 88 FTRUNCATE >>> 0.00 263.28 us 20.00 us 28385.00 us >>> 228 LK >>> 0.00 98.84 us 2.00 us 880.00 us >>> 1198 OPENDIR >>> 0.00 91.59 us 26.00 us 10371.00 us >>> 3853 STATFS >>> 0.00 494.14 us 17.00 us 193439.00 us >>> 1171 GETXATTR >>> 0.00 299.42 us 35.00 us 9799.00 us >>> 2044 READDIRP >>> 0.00 1965.31 us 110.00 us 382258.00 us >>> 321 XATTROP >>> 0.01 113.40 us 24.00 us 61061.00 us >>> 8134 STAT >>> 0.01 755.38 us 57.00 us 607603.00 us >>> 3196 DISCARD >>> 0.05 2690.09 us 58.00 us 2704761.00 us >>> 3206 OPEN >>> 0.10 119978.25 us 97.00 us 9406684.00 us >>> 154 SETATTR >>> 0.18 101.73 us 28.00 us 700477.00 us >>> 313379 FSTAT >>> 0.23 1059.84 us 25.00 us 2716124.00 us >>> 38255 LOOKUP >>> 0.47 1024.11 us 54.00 us 6197164.00 us >>> 81455 FXATTROP >>> 1.72 2984.00 us 15.00 us 37098954.00 us >>> 103020 FINODELK >>> 5.92 44315.32 us 51.00 us 24731536.00 us >>> 23957 FSYNC >>> 13.27 2399.78 us 25.00 us 22089540.00 us >>> 991005 READ >>> 37.00 5980.43 us 52.00 us 22099889.00 us >>> 1108976 WRITE >>> 41.04 5452.75 us 13.00 us 22102452.00 us >>> 1349053 INODELK >>> >>> Duration: 10026 seconds >>> Data Read: 80046027759 bytes >>> Data Written: 44496632320 bytes >>> >>> Interval 1 Stats: >>> Block Size: 256b+ 512b+ >>> 1024b+ >>> No. of Reads: 983 2696 >>> 1059 >>> No. of Writes: 0 838 >>> 185 >>> >>> Block Size: 2048b+ 4096b+ >>> 8192b+ >>> No. of Reads: 852 85856 >>> 51575 >>> No. 
of Writes: 382 705802 >>> 57812 >>> >>> Block Size: 16384b+ 32768b+ >>> 65536b+ >>> No. of Reads: 52673 232093 >>> 14984 >>> No. of Writes: 13499 4908 >>> 4242 >>> >>> Block Size: 131072b+ >>> No. of Reads: 460040 >>> No. of Writes: 6411 >>> %-latency Avg-latency Min-Latency Max-Latency No. of >>> calls Fop >>> --------- ----------- ----------- ----------- >>> ------------ ---- >>> 0.00 0.00 us 0.00 us 0.00 us >>> 2093 RELEASE >>> 0.00 0.00 us 0.00 us 0.00 us >>> 1093 RELEASEDIR >>> 0.00 53.38 us 26.00 us 111.00 us >>> 16 FLUSH >>> 0.00 145.14 us 78.00 us 367.00 us >>> 71 REMOVEXATTR >>> 0.00 190.96 us 114.00 us 298.00 us >>> 71 SETATTR >>> 0.00 213.38 us 24.00 us 1145.00 us >>> 90 READDIR >>> 0.00 263.28 us 20.00 us 28385.00 us >>> 228 LK >>> 0.00 101.76 us 2.00 us 880.00 us >>> 1093 OPENDIR >>> 0.01 93.60 us 27.00 us 10371.00 us >>> 3090 STATFS >>> 0.02 537.47 us 17.00 us 193439.00 us >>> 1038 GETXATTR >>> 0.03 297.44 us 35.00 us 9799.00 us >>> 1990 READDIRP >>> 0.03 2357.28 us 110.00 us 382258.00 us >>> 253 XATTROP >>> 0.04 385.93 us 58.00 us 47593.00 us >>> 2091 OPEN >>> 0.04 114.86 us 24.00 us 61061.00 us >>> 7715 STAT >>> 0.06 444.59 us 57.00 us 333240.00 us >>> 3053 DISCARD >>> 0.42 316.24 us 25.00 us 290728.00 us >>> 29823 LOOKUP >>> 0.73 257.92 us 54.00 us 344812.00 us >>> 63296 FXATTROP >>> 1.37 98.30 us 28.00 us 67621.00 us >>> 313172 FSTAT >>> 1.58 2124.69 us 51.00 us 849200.00 us >>> 16717 FSYNC >>> 5.73 162.46 us 52.00 us 748492.00 us >>> 794079 WRITE >>> 7.19 2065.17 us 16.00 us 37098954.00 us >>> 78381 FINODELK >>> 36.44 886.32 us 25.00 us 2216436.00 us >>> 925421 READ >>> 46.30 1178.04 us 13.00 us 1700704.00 us >>> 884635 INODELK >>> >>> Duration: 7485 seconds >>> Data Read: 71250527215 bytes >>> Data Written: 5119903744 bytes >>> >>> Brick: ovirt3.nwfiber.com:/gluster/brick2/data >>> ---------------------------------------------- >>> Cumulative Stats: >>> Block Size: 1b+ >>> No. of Reads: 0 >>> No. 
of Writes: 3264419 >>> %-latency Avg-latency Min-Latency Max-Latency No. of >>> calls Fop >>> --------- ----------- ----------- ----------- >>> ------------ ---- >>> 0.00 0.00 us 0.00 us 0.00 us >>> 90 FORGET >>> 0.00 0.00 us 0.00 us 0.00 us >>> 9462 RELEASE >>> 0.00 0.00 us 0.00 us 0.00 us >>> 4254 RELEASEDIR >>> 0.00 50.52 us 13.00 us 190.00 us >>> 71 FLUSH >>> 0.00 186.97 us 87.00 us 713.00 us >>> 86 REMOVEXATTR >>> 0.00 79.32 us 33.00 us 189.00 us >>> 228 LK >>> 0.00 220.98 us 129.00 us 513.00 us >>> 86 SETATTR >>> 0.01 259.30 us 26.00 us 2632.00 us >>> 137 READDIR >>> 0.02 322.76 us 145.00 us 2125.00 us >>> 321 XATTROP >>> 0.03 109.55 us 2.00 us 1258.00 us >>> 1193 OPENDIR >>> 0.05 70.21 us 21.00 us 431.00 us >>> 3196 DISCARD >>> 0.05 169.26 us 21.00 us 2315.00 us >>> 1545 GETXATTR >>> 0.12 176.85 us 63.00 us 2844.00 us >>> 3206 OPEN >>> 0.61 303.49 us 90.00 us 3085.00 us >>> 9633 FSTAT >>> 2.44 305.66 us 28.00 us 3716.00 us >>> 38230 LOOKUP >>> 4.52 266.22 us 55.00 us 53424.00 us >>> 81455 FXATTROP >>> 6.96 1397.99 us 51.00 us 64822.00 us >>> 23889 FSYNC >>> 16.48 84.74 us 25.00 us 6917.00 us >>> 932592 WRITE >>> 30.16 106.90 us 13.00 us 3920189.00 us >>> 1353046 INODELK >>> 38.55 1794.52 us 14.00 us 16210553.00 us >>> 103039 FINODELK >>> >>> Duration: 66562 seconds >>> Data Read: 0 bytes >>> Data Written: 3264419 bytes >>> >>> Interval 1 Stats: >>> Block Size: 1b+ >>> No. of Reads: 0 >>> No. of Writes: 794080 >>> %-latency Avg-latency Min-Latency Max-Latency No. 
of >>> calls Fop >>> --------- ----------- ----------- ----------- >>> ------------ ---- >>> 0.00 0.00 us 0.00 us 0.00 us >>> 2093 RELEASE >>> 0.00 0.00 us 0.00 us 0.00 us >>> 1093 RELEASEDIR >>> 0.00 70.31 us 26.00 us 125.00 us >>> 16 FLUSH >>> 0.00 193.10 us 103.00 us 713.00 us >>> 71 REMOVEXATTR >>> 0.01 227.32 us 133.00 us 513.00 us >>> 71 SETATTR >>> 0.01 79.32 us 33.00 us 189.00 us >>> 228 LK >>> 0.01 259.83 us 35.00 us 1138.00 us >>> 89 READDIR >>> 0.03 318.26 us 145.00 us 2047.00 us >>> 253 XATTROP >>> 0.04 112.67 us 3.00 us 1258.00 us >>> 1093 OPENDIR >>> 0.06 167.98 us 23.00 us 1951.00 us >>> 1014 GETXATTR >>> 0.08 70.97 us 22.00 us 431.00 us >>> 3053 DISCARD >>> 0.13 183.78 us 66.00 us 2844.00 us >>> 2091 OPEN >>> 1.01 303.82 us 90.00 us 3085.00 us >>> 9610 FSTAT >>> 3.27 316.59 us 30.00 us 3716.00 us >>> 29820 LOOKUP >>> 5.83 265.79 us 59.00 us 53424.00 us >>> 63296 FXATTROP >>> 7.95 1373.89 us 51.00 us 64822.00 us >>> 16717 FSYNC >>> 23.17 851.99 us 14.00 us 16210553.00 us >>> 78555 FINODELK >>> 24.04 87.44 us 27.00 us 6917.00 us >>> 794081 WRITE >>> 34.36 111.91 us 14.00 us 984871.00 us >>> 886790 INODELK >>> >>> Duration: 7485 seconds >>> Data Read: 0 bytes >>> Data Written: 794080 bytes >>> >>> >>> ----------------------- >>> Here is the data from the volume that is backed by the SHDDs and >>> has one failed disk: >>> [root@ovirt2 yum.repos.d]# gluster volume profile data-hdd info >>> Brick: 172.172.1.12:/gluster/brick3/data-hdd >>> -------------------------------------------- >>> Cumulative Stats: >>> Block Size: 256b+ 512b+ >>> 1024b+ >>> No. of Reads: 1702 86 >>> 16 >>> No. of Writes: 0 767 >>> 71 >>> >>> Block Size: 2048b+ 4096b+ >>> 8192b+ >>> No. of Reads: 19 51841 >>> 2049 >>> No. of Writes: 76 60668 >>> 35727 >>> >>> Block Size: 16384b+ 32768b+ >>> 65536b+ >>> No. of Reads: 1744 639 >>> 1088 >>> No. of Writes: 8524 2410 >>> 1285 >>> >>> Block Size: 131072b+ >>> No. of Reads: 771999 >>> No. 
of Writes: 29584 >>> %-latency Avg-latency Min-Latency Max-Latency No. of >>> calls Fop >>> --------- ----------- ----------- ----------- >>> ------------ ---- >>> 0.00 0.00 us 0.00 us 0.00 us >>> 2902 RELEASE >>> 0.00 0.00 us 0.00 us 0.00 us >>> 1517 RELEASEDIR >>> 0.00 197.00 us 197.00 us 197.00 us >>> 1 FTRUNCATE >>> 0.00 70.24 us 16.00 us 758.00 us >>> 51 FLUSH >>> 0.00 143.93 us 82.00 us 305.00 us >>> 57 REMOVEXATTR >>> 0.00 178.63 us 105.00 us 712.00 us >>> 60 SETATTR >>> 0.00 67.30 us 19.00 us 572.00 us >>> 555 LK >>> 0.00 322.80 us 23.00 us 4673.00 us >>> 138 READDIR >>> 0.00 336.56 us 106.00 us 11994.00 us >>> 237 XATTROP >>> 0.00 84.70 us 28.00 us 1071.00 us >>> 3469 STATFS >>> 0.01 387.75 us 2.00 us 146017.00 us >>> 1467 OPENDIR >>> 0.01 148.59 us 21.00 us 64374.00 us >>> 4454 STAT >>> 0.02 783.02 us 16.00 us 93502.00 us >>> 1902 GETXATTR >>> 0.03 1516.10 us 17.00 us 210690.00 us >>> 1364 ENTRYLK >>> 0.03 2555.47 us 300.00 us 674454.00 us >>> 1064 READDIRP >>> 0.07 85.74 us 19.00 us 68340.00 us >>> 62849 FSTAT >>> 0.07 1978.12 us 59.00 us 202596.00 us >>> 2729 OPEN >>> 0.22 708.57 us 15.00 us 394799.00 us >>> 25447 LOOKUP >>> 5.94 2331.74 us 15.00 us 1099530.00 us >>> 207534 FINODELK >>> 7.31 8311.75 us 58.00 us 1800216.00 us >>> 71668 FXATTROP >>> 12.49 7735.19 us 51.00 us 3595513.00 us >>> 131642 WRITE >>> 17.70 957.08 us 16.00 us 13700466.00 us >>> 1508160 INODELK >>> 24.55 2546.43 us 26.00 us 5077347.00 us >>> 786060 READ >>> 31.56 49699.15 us 47.00 us 3746331.00 us >>> 51777 FSYNC >>> >>> Duration: 10101 seconds >>> Data Read: 101562897361 bytes >>> Data Written: 4834450432 bytes >>> >>> Interval 0 Stats: >>> Block Size: 256b+ 512b+ >>> 1024b+ >>> No. of Reads: 1702 86 >>> 16 >>> No. of Writes: 0 767 >>> 71 >>> >>> Block Size: 2048b+ 4096b+ >>> 8192b+ >>> No. of Reads: 19 51841 >>> 2049 >>> No. of Writes: 76 60668 >>> 35727 >>> >>> Block Size: 16384b+ 32768b+ >>> 65536b+ >>> No. of Reads: 1744 639 >>> 1088 >>> No. 
of Writes: 8524 2410 >>> 1285 >>> >>> Block Size: 131072b+ >>> No. of Reads: 771999 >>> No. of Writes: 29584 >>> %-latency Avg-latency Min-Latency Max-Latency No. of >>> calls Fop >>> --------- ----------- ----------- ----------- >>> ------------ ---- >>> 0.00 0.00 us 0.00 us 0.00 us >>> 2902 RELEASE >>> 0.00 0.00 us 0.00 us 0.00 us >>> 1517 RELEASEDIR >>> 0.00 197.00 us 197.00 us 197.00 us >>> 1 FTRUNCATE >>> 0.00 70.24 us 16.00 us 758.00 us >>> 51 FLUSH >>> 0.00 143.93 us 82.00 us 305.00 us >>> 57 REMOVEXATTR >>> 0.00 178.63 us 105.00 us 712.00 us >>> 60 SETATTR >>> 0.00 67.30 us 19.00 us 572.00 us >>> 555 LK >>> 0.00 322.80 us 23.00 us 4673.00 us >>> 138 READDIR >>> 0.00 336.56 us 106.00 us 11994.00 us >>> 237 XATTROP >>> 0.00 84.70 us 28.00 us 1071.00 us >>> 3469 STATFS >>> 0.01 387.75 us 2.00 us 146017.00 us >>> 1467 OPENDIR >>> 0.01 148.59 us 21.00 us 64374.00 us >>> 4454 STAT >>> 0.02 783.02 us 16.00 us 93502.00 us >>> 1902 GETXATTR >>> 0.03 1516.10 us 17.00 us 210690.00 us >>> 1364 ENTRYLK >>> 0.03 2555.47 us 300.00 us 674454.00 us >>> 1064 READDIRP >>> 0.07 85.73 us 19.00 us 68340.00 us >>> 62849 FSTAT >>> 0.07 1978.12 us 59.00 us 202596.00 us >>> 2729 OPEN >>> 0.22 708.57 us 15.00 us 394799.00 us >>> 25447 LOOKUP >>> 5.94 2334.57 us 15.00 us 1099530.00 us >>> 207534 FINODELK >>> 7.31 8311.49 us 58.00 us 1800216.00 us >>> 71668 FXATTROP >>> 12.49 7735.32 us 51.00 us 3595513.00 us >>> 131642 WRITE >>> 17.71 957.08 us 16.00 us 13700466.00 us >>> 1508160 INODELK >>> 24.56 2546.42 us 26.00 us 5077347.00 us >>> 786060 READ >>> 31.54 49651.63 us 47.00 us 3746331.00 us >>> 51777 FSYNC >>> >>> Duration: 10101 seconds >>> Data Read: 101562897361 bytes >>> Data Written: 4834450432 bytes >>> >>> >>> On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir <jim@palousetech.com> >>> wrote: >>> >>>> Thank you for your response. >>>> >>>> I have 4 gluster volumes. 3 are replica 2 + arbitrator. replica >>>> bricks are on ovirt1 and ovirt2, arbitrator on ovirt3. 
The 4th volume is >>>> replica 3, with a brick on all three ovirt machines. >>>> >>>> The first 3 volumes are on an SSD disk; the 4th is on a Seagate >>>> SSHD (same in all three machines). On ovirt3, the SSHD has reported hard >>>> IO failures, and that brick is offline. However, the other two replicas >>>> are fully operational (although they still show contents in the heal info >>>> command that won't go away, but that may be the case until I replace the >>>> failed disk). >>>> >>>> What is bothering me is that ALL 4 gluster volumes are showing >>>> horrible performance issues. At this point, as the bad disk has been >>>> completely offlined, I would expect gluster to perform at normal speed, but >>>> that is definitely not the case. >>>> >>>> I've also noticed that the performance hits seem to come in >>>> waves: things seem to work acceptably (but slowly) for a while, then >>>> suddenly, it's as if all disk IO on all volumes (including non-gluster local >>>> OS disk volumes for the hosts) pauses for about 30 seconds, then IO resumes >>>> again. During those times, I start getting VM-not-responding and >>>> host-not-responding notices, as well as the applications having major issues. >>>> >>>> I've shut down most of my VMs and am down to just my essential >>>> core VMs (shed about 75% of my VMs). I am still experiencing the same >>>> issues. >>>> >>>> Am I correct in believing that once the failed disk was brought >>>> offline, performance should return to normal? >>>> >>>> On Tue, May 29, 2018 at 1:27 PM, Alex K <rightkicktech@gmail.com> >>>> wrote: >>>> >>>>> I would check disk status and accessibility of the mount points >>>>> where your gluster volumes reside.
>>>>> >>>>> On Tue, May 29, 2018, 22:28 Jim Kusznir <jim@palousetech.com> >>>>> wrote: >>>>> >>>>>> On one ovirt server, I'm now seeing these messages: >>>>>> [56474.239725] blk_update_request: 63 callbacks suppressed >>>>>> [56474.239732] blk_update_request: I/O error, dev dm-2, sector 0 >>>>>> [56474.240602] blk_update_request: I/O error, dev dm-2, sector >>>>>> 3905945472 >>>>>> [56474.241346] blk_update_request: I/O error, dev dm-2, sector >>>>>> 3905945584 >>>>>> [56474.242236] blk_update_request: I/O error, dev dm-2, sector >>>>>> 2048 >>>>>> [56474.243072] blk_update_request: I/O error, dev dm-2, sector >>>>>> 3905943424 >>>>>> [56474.243997] blk_update_request: I/O error, dev dm-2, sector >>>>>> 3905943536 >>>>>> [56474.247347] blk_update_request: I/O error, dev dm-2, sector 0 >>>>>> [56474.248315] blk_update_request: I/O error, dev dm-2, sector >>>>>> 3905945472 >>>>>> [56474.249231] blk_update_request: I/O error, dev dm-2, sector >>>>>> 3905945584 >>>>>> [56474.250221] blk_update_request: I/O error, dev dm-2, sector >>>>>> 2048 >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir < >>>>>> jim@palousetech.com> wrote: >>>>>> >>>>>>> I see in messages on ovirt3 (my 3rd machine, the one upgraded >>>>>>> to 4.2): >>>>>>> >>>>>>> May 29 11:54:41 ovirt3 ovs-vsctl: >>>>>>> ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>>>>> database connection failed (No such file or directory) >>>>>>> May 29 11:54:51 ovirt3 ovs-vsctl: >>>>>>> ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>>>>> database connection failed (No such file or directory) >>>>>>> May 29 11:55:01 ovirt3 ovs-vsctl: >>>>>>> ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>>>>> database connection failed (No such file or directory) >>>>>>> (appears a lot). >>>>>>> >>>>>>> I also found on the ssh session of that, some sysv warnings >>>>>>> about the backing disk for one of the gluster volumes (straight replica >>>>>>> 3). 
The glusterfs process for that disk on that machine went offline. It's >>>>>>> my understanding that it should continue to work with the other two >>>>>>> machines while I attempt to replace that disk, right? Attempted writes >>>>>>> (touching an empty file) can take 15 seconds; repeating it later is >>>>>>> much faster. >>>>>>> >>>>>>> Gluster generates a bunch of different log files; I don't know >>>>>>> which ones you want, or from which machine(s). >>>>>>> >>>>>>> How do I do "volume profiling"? >>>>>>> >>>>>>> Thanks! >>>>>>> >>>>>>> On Tue, May 29, 2018 at 11:53 AM, Sahina Bose < >>>>>>> sabose@redhat.com> wrote: >>>>>>> >>>>>>>> Do you see errors reported in the mount logs for the volume? >>>>>>>> If so, could you attach the logs? >>>>>>>> Any issues with your underlying disks? Can you also attach >>>>>>>> output of volume profiling? >>>>>>>> >>>>>>>> On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir < >>>>>>>> jim@palousetech.com> wrote: >>>>>>>> >>>>>>>>> Ok, things have gotten MUCH worse this morning. I'm getting >>>>>>>>> random errors from VMs; right now, about a third of my VMs have been paused >>>>>>>>> due to storage issues, and most of the remaining VMs are not performing >>>>>>>>> well. >>>>>>>>> >>>>>>>>> At this point, I am in full EMERGENCY mode, as my production >>>>>>>>> services are now impacted, and I'm getting calls coming in with problems... >>>>>>>>> >>>>>>>>> I'd greatly appreciate help... VMs are running VERY slowly >>>>>>>>> (when they run), and they are steadily getting worse. I don't know why. I >>>>>>>>> was seeing CPU peaks (to 100%) on several VMs, in perfect sync, for a few >>>>>>>>> minutes at a time (while the VM became unresponsive and any VMs I was >>>>>>>>> logged into that were Linux were giving me the CPU-stuck messages in my >>>>>>>>> original post). Is all this storage related?
>>>>>>>>> >>>>>>>>> I also have two different gluster volumes for VM storage, >>>>>>>>> and only one had the issues, but now VMs in both are being affected at the >>>>>>>>> same time and same way. >>>>>>>>> >>>>>>>>> --Jim >>>>>>>>> >>>>>>>>> On Mon, May 28, 2018 at 10:50 PM, Sahina Bose < >>>>>>>>> sabose@redhat.com> wrote: >>>>>>>>> >>>>>>>>>> [Adding gluster-users to look at the heal issue] >>>>>>>>>> >>>>>>>>>> On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir < >>>>>>>>>> jim@palousetech.com> wrote: >>>>>>>>>> >>>>>>>>>>> Hello: >>>>>>>>>>> >>>>>>>>>>> I've been having some cluster and gluster performance >>>>>>>>>>> issues lately. I also found that my cluster was out of date, and was >>>>>>>>>>> trying to apply updates (hoping to fix some of these), and discovered the >>>>>>>>>>> ovirt 4.1 repos were taken completely offline. So, I was forced to begin >>>>>>>>>>> an upgrade to 4.2. According to docs I found/read, I needed only add the >>>>>>>>>>> new repo, do a yum update, reboot, and be good on my hosts (did the yum >>>>>>>>>>> update, the engine-setup on my hosted engine). Things seemed to work >>>>>>>>>>> relatively well, except for a gluster sync issue that showed up. >>>>>>>>>>> >>>>>>>>>>> My cluster is a 3 node hyperconverged cluster. I upgraded >>>>>>>>>>> the hosted engine first, then engine 3. When engine 3 came back up, for >>>>>>>>>>> some reason one of my gluster volumes would not sync. 
Here's sample output: >>>>>>>>>>> >>>>>>>>>>> [root@ovirt3 ~]# gluster volume heal data-hdd info >>>>>>>>>>> Brick 172.172.1.11:/gluster/brick3/data-hdd >>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>> c3ae46c/images/48d7ecb8-7ac5-4 >>>>>>>>>>> 725-bca5-b3519681cf2f/0d6080b0 >>>>>>>>>>> -7018-4fa3-bb82-1dd9ef07d9b9 >>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>> c3ae46c/images/647be733-f153-4 >>>>>>>>>>> cdc-85bd-ba72544c2631/b453a300 >>>>>>>>>>> -0602-4be1-8310-8bd5abe00971 >>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>> c3ae46c/images/6da854d1-b6be-4 >>>>>>>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f >>>>>>>>>>> -b7fa-4aa2-b445-6904e31839ba >>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>> c3ae46c/images/7f647567-d18c-4 >>>>>>>>>>> 4f1-a58e-9b8865833acb/f9364470 >>>>>>>>>>> -9770-4bb1-a6b9-a54861849625 >>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>> c3ae46c/images/f3c8e7aa-6ef2-4 >>>>>>>>>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad >>>>>>>>>>> -2606-44ef-9cd3-ae59610a504b >>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>> c3ae46c/images/b1ea3f62-0f05-4 >>>>>>>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a >>>>>>>>>>> -499f-431d-9013-5453db93ed32 >>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>> c3ae46c/images/8c8b5147-e9d6-4 >>>>>>>>>>> 810-b45b-185e3ed65727/16f08231 >>>>>>>>>>> -93b0-489d-a2fd-687b6bf88eaa >>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>> c3ae46c/images/12924435-b9c2-4 >>>>>>>>>>> aab-ba19-1c1bc31310ef/07b3db69 >>>>>>>>>>> -440e-491e-854c-bbfa18a7cff2 >>>>>>>>>>> Status: Connected >>>>>>>>>>> Number of entries: 8 >>>>>>>>>>> >>>>>>>>>>> Brick 172.172.1.12:/gluster/brick3/data-hdd >>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>> c3ae46c/images/48d7ecb8-7ac5-4 >>>>>>>>>>> 725-bca5-b3519681cf2f/0d6080b0 >>>>>>>>>>> -7018-4fa3-bb82-1dd9ef07d9b9 >>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>> c3ae46c/images/647be733-f153-4 >>>>>>>>>>> cdc-85bd-ba72544c2631/b453a300 >>>>>>>>>>> -0602-4be1-8310-8bd5abe00971 >>>>>>>>>>> 
/cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>> c3ae46c/images/b1ea3f62-0f05-4 >>>>>>>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a >>>>>>>>>>> -499f-431d-9013-5453db93ed32 >>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>> c3ae46c/images/6da854d1-b6be-4 >>>>>>>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f >>>>>>>>>>> -b7fa-4aa2-b445-6904e31839ba >>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>> c3ae46c/images/7f647567-d18c-4 >>>>>>>>>>> 4f1-a58e-9b8865833acb/f9364470 >>>>>>>>>>> -9770-4bb1-a6b9-a54861849625 >>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>> c3ae46c/images/8c8b5147-e9d6-4 >>>>>>>>>>> 810-b45b-185e3ed65727/16f08231 >>>>>>>>>>> -93b0-489d-a2fd-687b6bf88eaa >>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>> c3ae46c/images/12924435-b9c2-4 >>>>>>>>>>> aab-ba19-1c1bc31310ef/07b3db69 >>>>>>>>>>> -440e-491e-854c-bbfa18a7cff2 >>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>> c3ae46c/images/f3c8e7aa-6ef2-4 >>>>>>>>>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad >>>>>>>>>>> -2606-44ef-9cd3-ae59610a504b >>>>>>>>>>> Status: Connected >>>>>>>>>>> Number of entries: 8 >>>>>>>>>>> >>>>>>>>>>> Brick 172.172.1.13:/gluster/brick3/data-hdd >>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>> c3ae46c/images/b1ea3f62-0f05-4 >>>>>>>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a >>>>>>>>>>> -499f-431d-9013-5453db93ed32 >>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>> c3ae46c/images/8c8b5147-e9d6-4 >>>>>>>>>>> 810-b45b-185e3ed65727/16f08231 >>>>>>>>>>> -93b0-489d-a2fd-687b6bf88eaa >>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>> c3ae46c/images/12924435-b9c2-4 >>>>>>>>>>> aab-ba19-1c1bc31310ef/07b3db69 >>>>>>>>>>> -440e-491e-854c-bbfa18a7cff2 >>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>> c3ae46c/images/f3c8e7aa-6ef2-4 >>>>>>>>>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad >>>>>>>>>>> -2606-44ef-9cd3-ae59610a504b >>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>> c3ae46c/images/647be733-f153-4 >>>>>>>>>>> cdc-85bd-ba72544c2631/b453a300 >>>>>>>>>>> -0602-4be1-8310-8bd5abe00971 
>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>> c3ae46c/images/48d7ecb8-7ac5-4 >>>>>>>>>>> 725-bca5-b3519681cf2f/0d6080b0 >>>>>>>>>>> -7018-4fa3-bb82-1dd9ef07d9b9 >>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>> c3ae46c/images/6da854d1-b6be-4 >>>>>>>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f >>>>>>>>>>> -b7fa-4aa2-b445-6904e31839ba >>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>> c3ae46c/images/7f647567-d18c-4 >>>>>>>>>>> 4f1-a58e-9b8865833acb/f9364470 >>>>>>>>>>> -9770-4bb1-a6b9-a54861849625 >>>>>>>>>>> Status: Connected >>>>>>>>>>> Number of entries: 8 >>>>>>>>>>> >>>>>>>>>>> --------- >>>>>>>>>>> Its been in this state for a couple days now, and >>>>>>>>>>> bandwidth monitoring shows no appreciable data moving. I've tried >>>>>>>>>>> repeatedly commanding a full heal from all three clusters in the node. Its >>>>>>>>>>> always the same files that need healing. >>>>>>>>>>> >>>>>>>>>>> When running gluster volume heal data-hdd statistics, I >>>>>>>>>>> see sometimes different information, but always some number of "heal >>>>>>>>>>> failed" entries. It shows 0 for split brain. >>>>>>>>>>> >>>>>>>>>>> I'm not quite sure what to do. I suspect it may be due to >>>>>>>>>>> nodes 1 and 2 still being on the older ovirt/gluster release, but I'm >>>>>>>>>>> afraid to upgrade and reboot them until I have a good gluster sync (don't >>>>>>>>>>> need to create a split brain issue). How do I proceed with this? >>>>>>>>>>> >>>>>>>>>>> Second issue: I've been experiencing VERY POOR performance >>>>>>>>>>> on most of my VMs. To the tune that logging into a windows 10 vm via >>>>>>>>>>> remote desktop can take 5 minutes, launching quickbooks inside said vm can >>>>>>>>>>> easily take 10 minutes. On some linux VMs, I get random messages like this: >>>>>>>>>>> Message from syslogd@unifi at May 28 20:39:23 ... >>>>>>>>>>> kernel:[6171996.308904] NMI watchdog: BUG: soft lockup - >>>>>>>>>>> CPU#0 stuck for 22s! 
[mongod:14766] >>>>>>>>>>> >>>>>>>>>>> (the process and PID are often different) >>>>>>>>>>> >>>>>>>>>>> I'm not quite sure what to do about this either. My >>>>>>>>>>> initial thought was upgrad everything to current and see if its still >>>>>>>>>>> there, but I cannot move forward with that until my gluster is healed... >>>>>>>>>>> >>>>>>>>>>> Thanks! >>>>>>>>>>> --Jim >>>>>>>>>>> >>>>>>>>>>> _______________________________________________ >>>>>>>>>>> Users mailing list -- users@ovirt.org >>>>>>>>>>> To unsubscribe send an email to users-leave@ovirt.org >>>>>>>>>>> Privacy Statement: https://www.ovirt.org/site/pri >>>>>>>>>>> vacy-policy/ >>>>>>>>>>> oVirt Code of Conduct: https://www.ovirt.org/communit >>>>>>>>>>> y/about/community-guidelines/ >>>>>>>>>>> List Archives: https://lists.ovirt.org/archiv >>>>>>>>>>> es/list/users@ovirt.org/messag >>>>>>>>>>> e/3LEV6ZQ3JV2XLAL7NYBTXOYMYUOTIRQF/ >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> _______________________________________________ >>>>>> Users mailing list -- users@ovirt.org >>>>>> To unsubscribe send an email to users-leave@ovirt.org >>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ >>>>>> oVirt Code of Conduct: https://www.ovirt.org/communit >>>>>> y/about/community-guidelines/ >>>>>> List Archives: https://lists.ovirt.org/archiv >>>>>> es/list/users@ovirt.org/message/ACO7RFSLBSRBAIONIC2HQ6Z24ZDE >>>>>> S5MF/ >>>>>> >>>>> >>>> >>> >> >
_______________________________________________ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-leave@ovirt.org Privacy Statement: https://www.ovirt.org/site/privacy-policy/ oVirt Code of Conduct: https://www.ovirt.org/communit y/about/community-guidelines/ List Archives: https://lists.ovirt.org/archiv es/list/users@ovirt.org/message/JKBSLSZFRMFGILXGZB4YAMJIJKEKNGMA/
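For anyone following the stuck-heal symptom above: since "Number of entries: 8" never changes between runs, one way to confirm the self-heal daemon is making no progress is to snapshot the per-brick counts and compare them over time. Below is a minimal sketch; the embedded excerpt and image paths are abbreviated stand-ins for real "gluster volume heal data-hdd info" output, assuming the standard output format shown above.

```shell
# Hypothetical, abbreviated sample of `gluster volume heal data-hdd info`
# output, embedded so the parsing can be tried without a live cluster.
heal_info='Brick 172.172.1.11:/gluster/brick3/data-hdd
/images/sample-a
/images/sample-b
Status: Connected
Number of entries: 2

Brick 172.172.1.12:/gluster/brick3/data-hdd
/images/sample-a
/images/sample-b
Status: Connected
Number of entries: 2'

# Print one "brick count" pair per brick; if these counts do not shrink
# between snapshots taken a few minutes apart, no healing is happening.
counts=$(printf '%s\n' "$heal_info" | awk '
  /^Brick /             { brick = $2 }
  /^Number of entries:/ { print brick, $NF }')
printf '%s\n' "$counts"
```

Against a live volume one would pipe the real command into the same awk filter, e.g. `gluster volume heal data-hdd info | awk ...`, and diff two snapshots.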

So, right now I think this message from vdsm.log captures my blocking problem:

2018-05-30 01:27:22,326-0700 INFO (jsonrpc/5) [vdsm.api] FINISH repoStats return={u'c0acdefb-7d16-48ec-9d76-659b8fe33e2a': {'code': 0, 'actual': True, 'version': 4, 'acquired': False, 'delay': '0.00142076', 'lastCheck': '1.1', 'valid': True}} from=::1,60684, task_id=7da641b6-9a98-4db7-aafb-04ff84228cf5 (api:52)
2018-05-30 01:27:22,326-0700 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:573)
2018-05-30 01:27:23,411-0700 ERROR (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') The vm start process failed (vm:943)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 872, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2872, in _run
    dom.createWithFlags(flags)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1099, in createWithFlags
    if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
libvirtError: Failed to acquire lock: No space left on device
2018-05-30 01:27:23,411-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') Changed state to Down: Failed to acquire lock: No space left on device (code=1) (vm:1683)

The strange thing is I can't figure out what device has no space left. The only hits on Google talk about deleting the __DIRECT_IO_TEST__ file in the storage repo, which I tried, but it did not help. Perhaps one of the files in the engine volume stores locks, and that is full/corrupt?

--Jim

On Wed, May 30, 2018 at 12:25 AM, Jim Kusznir <jim@palousetech.com> wrote:
I tried to capture a consistent log slice across the cluster from agent.log. Several systems claim they are starting the engine, but virsh never shows any sign of it. They also report that a UUID is not found, one that I'm pretty sure is not affiliated with my engine.
--------- ovirt1: --------- MainThread::INFO::2018-05-30 00:18:11,628::brokerlink::68:: ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineForceStop-EngineDown) sent? sent [root@ovirt1 vdsm]# tail -n 20 ../ovirt-hosted-engine-ha/agent.log MainThread::INFO::2018-05-30 00:14:02,154::hosted_engine:: 491::ovirt_hosted_engine_ha.agent.hosted_engine. HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400) MainThread::INFO::2018-05-30 00:14:02,173::states::776:: ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Another host already took over.. MainThread::INFO::2018-05-30 00:14:02,174::state_ decorators::92::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Timeout cleared while transitioning <class 'ovirt_hosted_engine_ha.agent.states.EngineStarting'> -> <class 'ovirt_hosted_engine_ha.agent.states.EngineForceStop'> MainThread::INFO::2018-05-30 00:14:02,533::brokerlink::68:: ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStarting-EngineForceStop) sent? sent MainThread::INFO::2018-05-30 00:14:02,843::ovf_store::151:: ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76- 659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8- 49e73e6ba929 MainThread::INFO::2018-05-30 00:18:09,236::hosted_engine:: 491::ovirt_hosted_engine_ha.agent.hosted_engine. HostedEngine::(_monitoring_loop) Current state EngineForceStop (score: 3400) MainThread::INFO::2018-05-30 00:18:09,236::hosted_engine:: 499::ovirt_hosted_engine_ha.agent.hosted_engine. HostedEngine::(_monitoring_loop) Best remote host ovirt3.nwfiber.com (id: 3, score: 3400) MainThread::INFO::2018-05-30 00:18:09,253::hosted_engine:: 1007::ovirt_hosted_engine_ha.agent.hosted_engine. 
HostedEngine::(_stop_engine_vm) Shutting down vm using `/usr/sbin/hosted-engine --vm-poweroff` MainThread::INFO::2018-05-30 00:18:11,159::hosted_engine:: 1012::ovirt_hosted_engine_ha.agent.hosted_engine. HostedEngine::(_stop_engine_vm) stdout: MainThread::INFO::2018-05-30 00:18:11,160::hosted_engine:: 1013::ovirt_hosted_engine_ha.agent.hosted_engine. HostedEngine::(_stop_engine_vm) stderr: MainThread::ERROR::2018-05-30 00:18:11,160::hosted_engine:: 1021::ovirt_hosted_engine_ha.agent.hosted_engine. HostedEngine::(_stop_engine_vm) Engine VM stopped on localhost MainThread::INFO::2018-05-30 00:18:11,628::brokerlink::68:: ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineForceStop-EngineDown) sent? sent MainThread::INFO::2018-05-30 00:22:17,692::hosted_engine:: 491::ovirt_hosted_engine_ha.agent.hosted_engine. HostedEngine::(_monitoring_loop) Current state EngineDown (score: 3400) MainThread::INFO::2018-05-30 00:22:17,713::state_machine:: 169::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Global metadata: {'maintenance': False} MainThread::INFO::2018-05-30 00:22:17,713::state_machine:: 174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host ovirt2.nwfiber.com (id 2): {'conf_on_shared_storage': True, 'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=4348 (Wed May 30 00:16:46 2018)\nhost-id=2\nscore=3400\nvm_conf_refresh_time=4349 (Wed May 30 00:16:47 2018)\nconf_on_shared_storage= True\nmaintenance=False\nstate=EngineDown\nstopped=False\n', 'hostname': ' ovirt2.nwfiber.com', 'alive': True, 'host-id': 2, 'engine-status': {'reason': 'bad vm status', 'health': 'bad', 'vm': 'down_unexpected', 'detail': 'Down'}, 'score': 3400, 'stopped': False, 'maintenance': False, 'crc32': '5ee0a557', 'local_conf_timestamp': 4349, 'host-ts': 4348} MainThread::INFO::2018-05-30 00:22:17,713::state_machine:: 
174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host ovirt3.nwfiber.com (id 3): {'conf_on_shared_storage': True, 'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=100036 (Wed May 30 00:16:58 2018)\nhost-id=3\nscore=3400\nvm_conf_refresh_time=100037 (Wed May 30 00:16:58 2018)\nconf_on_shared_storage= True\nmaintenance=False\nstate=EngineStart\nstopped=False\n', 'hostname': 'ovirt3.nwfiber.com', 'alive': True, 'host-id': 3, 'engine-status': {'reason': 'bad vm status', 'health': 'bad', 'vm': 'down_unexpected', 'detail': 'Down'}, 'score': 3400, 'stopped': False, 'maintenance': False, 'crc32': 'e2fadf31', 'local_conf_timestamp': 100037, 'host-ts': 100036} MainThread::INFO::2018-05-30 00:22:17,713::state_machine:: 177::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Local (id 1): {'engine-health': {'reason': 'vm not running on this host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'bridge': True, 'mem-free': 30186.0, 'maintenance': False, 'cpu-load': 0.0308, 'gateway': 1.0, 'storage-domain': True} MainThread::INFO::2018-05-30 00:22:17,714::states::508:: ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine down and local host has best score (3400), attempting to start engine VM MainThread::INFO::2018-05-30 00:22:18,309::brokerlink::68:: ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineDown-EngineStart) sent? sent MainThread::INFO::2018-05-30 00:22:18,624::ovf_store::151:: ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76- 659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8- 49e73e6ba929
---------- ovirt2 ---------- MainThread::INFO::2018-05-30 00:08:34,534::hosted_engine:: 491::ovirt_hosted_engine_ha.agent.hosted_engine. HostedEngine::(_monitoring_loop) Current state EngineUnexpectedlyDown (score: 0) MainThread::INFO::2018-05-30 00:08:34,534::hosted_engine:: 499::ovirt_hosted_engine_ha.agent.hosted_engine. HostedEngine::(_monitoring_loop) Best remote host ovirt3.nwfiber.com (id: 3, score: 3400) MainThread::INFO::2018-05-30 00:08:34,545::states::678:: ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine down, local host does not have best score MainThread::INFO::2018-05-30 00:12:40,627::states::693:: ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score) Score is 0 due to unexpected vm shutdown at Wed May 30 00:04:28 2018 MainThread::INFO::2018-05-30 00:12:40,628::hosted_engine:: 491::ovirt_hosted_engine_ha.agent.hosted_engine. HostedEngine::(_monitoring_loop) Current state EngineUnexpectedlyDown (score: 0) MainThread::INFO::2018-05-30 00:12:40,628::hosted_engine:: 499::ovirt_hosted_engine_ha.agent.hosted_engine. HostedEngine::(_monitoring_loop) Best remote host ovirt1.nwfiber.com (id: 1, score: 3400) MainThread::INFO::2018-05-30 00:12:40,643::states::678:: ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine down, local host does not have best score MainThread::INFO::2018-05-30 00:12:40,827::ovf_store::151:: ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76- 659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8- 49e73e6ba929 MainThread::INFO::2018-05-30 00:16:46,635::states::693:: ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score) Score is 0 due to unexpected vm shutdown at Wed May 30 00:04:28 2018 MainThread::INFO::2018-05-30 00:16:46,636::hosted_engine:: 491::ovirt_hosted_engine_ha.agent.hosted_engine. 
HostedEngine::(_monitoring_loop) Current state EngineUnexpectedlyDown (score: 0) MainThread::INFO::2018-05-30 00:16:46,648::state_machine:: 169::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Global metadata: {'maintenance': False} MainThread::INFO::2018-05-30 00:16:46,648::state_machine:: 174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host ovirt1.nwfiber.com (id 1): {'conf_on_shared_storage': True, 'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=7614 (Wed May 30 00:09:54 2018)\nhost-id=1\nscore=3400\nvm_conf_refresh_time=7614 (Wed May 30 00:09:54 2018)\nconf_on_shared_storage= True\nmaintenance=False\nstate=EngineStarting\nstopped=False\n', 'hostname': 'ovirt1.nwfiber.com', 'alive': True, 'host-id': 1, 'engine-status': {'reason': 'vm not running on this host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'score': 3400, 'stopped': False, 'maintenance': False, 'crc32': '1fa13e84', 'local_conf_timestamp': 7614, 'host-ts': 7614} MainThread::INFO::2018-05-30 00:16:46,648::state_machine:: 174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host ovirt3.nwfiber.com (id 3): {'conf_on_shared_storage': True, 'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=99541 (Wed May 30 00:08:43 2018)\nhost-id=3\nscore=3400\nvm_conf_refresh_time=99542 (Wed May 30 00:08:43 2018)\nconf_on_shared_storage= True\nmaintenance=False\nstate=EngineForceStop\nstopped=False\n', 'hostname': 'ovirt3.nwfiber.com', 'alive': True, 'host-id': 3, 'engine-status': {'reason': 'bad vm status', 'health': 'bad', 'vm': 'down', 'detail': 'Down'}, 'score': 3400, 'stopped': False, 'maintenance': False, 'crc32': '56f87e2c', 'local_conf_timestamp': 99542, 'host-ts': 99541} MainThread::INFO::2018-05-30 00:16:46,648::state_machine:: 177::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Local (id 2): {'engine-health': {'reason': 'bad vm status', 'health': 'bad', 'vm': 
'down_unexpected', 'detail': 'Down'}, 'bridge': True, 'mem-free': 30891.0, 'maintenance': False, 'cpu-load': 0.0186, 'gateway': 1.0, 'storage-domain': True} MainThread::INFO::2018-05-30 00:16:46,919::brokerlink::68:: ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineUnexpectedlyDown-EngineDown) sent? sent MainThread::INFO::2018-05-30 00:20:52,918::hosted_engine:: 491::ovirt_hosted_engine_ha.agent.hosted_engine. HostedEngine::(_monitoring_loop) Current state EngineDown (score: 3400) MainThread::INFO::2018-05-30 00:20:52,918::hosted_engine:: 499::ovirt_hosted_engine_ha.agent.hosted_engine. HostedEngine::(_monitoring_loop) Best remote host ovirt1.nwfiber.com (id: 1, score: 3400) MainThread::INFO::2018-05-30 00:20:52,930::states::508:: ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine down and local host has best score (3400), attempting to start engine VM MainThread::INFO::2018-05-30 00:20:53,203::brokerlink::68:: ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineDown-EngineStart) sent? sent MainThread::INFO::2018-05-30 00:20:53,393::ovf_store::151:: ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76- 659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8- 49e73e6ba929
-------- ovirt3 ------- MainThread::INFO::2018-05-30 00:21:07,343::hosted_engine:: 929::ovirt_hosted_engine_ha.agent.hosted_engine. HostedEngine::(_start_engine_vm) Starting vm using `/usr/sbin/hosted-engine --vm-start` MainThread::INFO::2018-05-30 00:21:08,107::hosted_engine:: 935::ovirt_hosted_engine_ha.agent.hosted_engine. HostedEngine::(_start_engine_vm) stdout: MainThread::INFO::2018-05-30 00:21:08,108::hosted_engine:: 936::ovirt_hosted_engine_ha.agent.hosted_engine. HostedEngine::(_start_engine_vm) stderr: Command VM.getStats with args {'vmID': '50392390-4f89-435c-bf3b-8254b58a4ef7'} failed: (code=1, message=Virtual machine does not exist: {'vmId': u'50392390-4f89-435c-bf3b-8254b58a4ef7'})
[root@ovirt3 images]# tail -n 20 /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2018-05-30 00:12:51,792::hosted_engine::1012::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) stdout:
MainThread::INFO::2018-05-30 00:12:51,792::hosted_engine::1013::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) stderr:
MainThread::ERROR::2018-05-30 00:12:51,793::hosted_engine::1021::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) Engine VM stopped on localhost
MainThread::INFO::2018-05-30 00:12:52,157::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineForceStop-EngineDown) sent? sent
MainThread::INFO::2018-05-30 00:16:58,237::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineDown (score: 3400)
MainThread::INFO::2018-05-30 00:16:58,256::states::508::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine down and local host has best score (3400), attempting to start engine VM
MainThread::INFO::2018-05-30 00:16:58,595::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineDown-EngineStart) sent? sent
MainThread::INFO::2018-05-30 00:16:58,816::ovf_store::151::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8-49e73e6ba929
MainThread::INFO::2018-05-30 00:21:05,082::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStart (score: 3400)
MainThread::INFO::2018-05-30 00:21:05,083::hosted_engine::499::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Best remote host ovirt1.nwfiber.com (id: 1, score: 3400)
MainThread::INFO::2018-05-30 00:21:05,125::hosted_engine::968::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state) Ensuring VDSM state is clear for engine VM
MainThread::INFO::2018-05-30 00:21:05,141::hosted_engine::988::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state) Cleaning state for non-running VM
MainThread::INFO::2018-05-30 00:21:07,343::hosted_engine::980::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state) Vdsm state for VM clean
MainThread::INFO::2018-05-30 00:21:07,343::hosted_engine::929::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) Starting vm using `/usr/sbin/hosted-engine --vm-start`
MainThread::INFO::2018-05-30 00:21:08,107::hosted_engine::935::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) stdout:
MainThread::INFO::2018-05-30 00:21:08,108::hosted_engine::936::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) stderr: Command VM.getStats with args {'vmID': '50392390-4f89-435c-bf3b-8254b58a4ef7'} failed: (code=1, message=Virtual machine does not exist: {'vmId': u'50392390-4f89-435c-bf3b-8254b58a4ef7'})
MainThread::INFO::2018-05-30 00:21:08,108::hosted_engine::948::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) Engine VM started on localhost
MainThread::INFO::2018-05-30 00:21:08,413::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStart-EngineStarting) sent? sent
---------- Note that virsh does not show the engine started on this machine.
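For anyone wading through agent.log dumps like the one above, the HA state machine's path is easy to lose in the noise. A small helper along these lines (a sketch; the log path and line format are the oVirt defaults shown above) pulls out just the state transitions:

```shell
# Extract hosted-engine HA state transitions (e.g. EngineDown-EngineStart)
# from an agent log, so the state machine's path is easy to follow.
extract_transitions() {
    grep -o 'state_transition ([A-Za-z]*-[A-Za-z]*)' "$1" |
        sed 's/state_transition //; s/[()]//g'
}

# Demo against a snippet of the log quoted above:
cat > /tmp/agent-sample.log <<'EOF'
MainThread::INFO::2018-05-30 00:12:52,157::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineForceStop-EngineDown) sent? sent
MainThread::INFO::2018-05-30 00:16:58,595::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineDown-EngineStart) sent? sent
MainThread::INFO::2018-05-30 00:21:08,413::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStart-EngineStarting) sent? sent
EOF
extract_transitions /tmp/agent-sample.log
# Prints:
#   EngineForceStop-EngineDown
#   EngineDown-EngineStart
#   EngineStart-EngineStarting
```

Run against the real /var/log/ovirt-hosted-engine-ha/agent.log this shows at a glance whether the agent is cycling between the same states.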
On Wed, May 30, 2018 at 12:16 AM, Jim Kusznir <jim@palousetech.com> wrote:
So, on ovirt2, I ran the hosted-engine --vm-status and it took over 2 minutes to get it to return, but two of the 3 machines had current (non-stale) data:
[root@ovirt2 ~]# hosted-engine --vm-status
--== Host 1 status ==--
conf_on_shared_storage : True
Status up-to-date      : False
Hostname               : ovirt1.nwfiber.com
Host ID                : 1
Engine status          : unknown stale-data
Score                  : 3400
stopped                : False
Local maintenance      : False
crc32                  : f562054b
local_conf_timestamp   : 7368
Host timestamp         : 7367
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=7367 (Wed May 30 00:05:47 2018)
    host-id=1
    score=3400
    vm_conf_refresh_time=7368 (Wed May 30 00:05:48 2018)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineStart
    stopped=False
--== Host 2 status ==--
conf_on_shared_storage : True
Status up-to-date      : True
Hostname               : ovirt2.nwfiber.com
Host ID                : 2
Engine status          : {"reason": "bad vm status", "health": "bad", "vm": "down_unexpected", "detail": "Down"}
Score                  : 0
stopped                : False
Local maintenance      : False
crc32                  : 6822f86a
local_conf_timestamp   : 3610
Host timestamp         : 3610
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=3610 (Wed May 30 00:04:28 2018)
    host-id=2
    score=0
    vm_conf_refresh_time=3610 (Wed May 30 00:04:28 2018)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineUnexpectedlyDown
    stopped=False
    timeout=Wed Dec 31 17:09:30 1969
--== Host 3 status ==--
conf_on_shared_storage : True
Status up-to-date      : True
Hostname               : ovirt3.nwfiber.com
Host ID                : 3
Engine status          : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                  : 3400
stopped                : False
Local maintenance      : False
crc32                  : 006512a3
local_conf_timestamp   : 99294
Host timestamp         : 99294
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=99294 (Wed May 30 00:04:36 2018)
    host-id=3
    score=3400
    vm_conf_refresh_time=99294 (Wed May 30 00:04:36 2018)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineStarting
    stopped=False
---------------
Since the HA score was 0 on ovirt2 (where I had been focusing my efforts) and ovirt3 had a good HA score, I tried to launch the engine there:
[root@ovirt3 ff3d0185-d120-40ee-9792-ebbd5caaca8b]# hosted-engine --vm-start
Command VM.getStats with args {'vmID': '50392390-4f89-435c-bf3b-8254b58a4ef7'} failed: (code=1, message=Virtual machine does not exist: {'vmId': u'50392390-4f89-435c-bf3b-8254b58a4ef7'})
ovirt1 was returning the "ha-agent or storage not online" message, but now it's hanging (probably 2 minutes), so it may actually return status this time. I figure that if it can't return status, there's no point in trying to start the VM there.
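As an aside for anyone triaging similar output: the long --vm-status dumps above reduce to three facts per host (hostname, score, data freshness). A throwaway awk sketch like this (assuming the "key : value" layout shown above) condenses them:

```shell
# Condense `hosted-engine --vm-status` output to one line per host:
# hostname, score, and whether that host's view of the cluster is fresh.
summarize_vm_status() {
    awk '
        /^Status up-to-date/ { fresh = $NF }
        /^Hostname/          { host  = $NF }
        /^Score/             { print host, "score=" $NF, "up-to-date=" fresh }
    ' "$1"
}

# Demo with the three hosts from the status dump above:
cat > /tmp/vm-status.txt <<'EOF'
Status up-to-date : False
Hostname : ovirt1.nwfiber.com
Score : 3400
Status up-to-date : True
Hostname : ovirt2.nwfiber.com
Score : 0
Status up-to-date : True
Hostname : ovirt3.nwfiber.com
Score : 3400
EOF
summarize_vm_status /tmp/vm-status.txt
# Prints:
#   ovirt1.nwfiber.com score=3400 up-to-date=False
#   ovirt2.nwfiber.com score=0 up-to-date=True
#   ovirt3.nwfiber.com score=3400 up-to-date=True
```

A score of 0 (ovirt2) means the agent has penalized that host and won't start the engine there; stale data (ovirt1) means its report can't be trusted at all.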
On Wed, May 30, 2018 at 12:08 AM, Jim Kusznir <jim@palousetech.com> wrote:
The agent.log was sent minutes ago in another message. Here's the gluster client log for the engine volume:
[2018-05-30 06:31:12.343709] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata selfheal on 3a430e50-48cb-463c-8657-aab5458ac155
[2018-05-30 06:31:12.348210] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-engine-replicate-0: expunging file 117c5234-e6d6-4f03-809b-a4ef6a6fac4d/hosted-engine.metadata (38a277cd-113d-4ca8-8f48-fadc944d3272) on engine-client-1
[2018-05-30 06:31:12.360602] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed metadata selfheal on 3a430e50-48cb-463c-8657-aab5458ac155. sources=[0] 2 sinks=1
[2018-05-30 06:31:12.379169] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed entry selfheal on 117c5234-e6d6-4f03-809b-a4ef6a6fac4d. sources=[0] 2 sinks=1
[2018-05-30 06:32:54.756729] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f
[2018-05-30 06:32:54.763175] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed metadata selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 06:37:06.581538] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 06c557a9-b550-4e25-b190-804e50cea65b. sources=[0] 2 sinks=1
[2018-05-30 06:39:55.739302] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 06:44:04.911825] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 06:58:05.123854] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata selfheal on 03ca7a59-e8ee-4a70-bc68-47e972aeb801
[2018-05-30 06:58:05.131929] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed metadata selfheal on 03ca7a59-e8ee-4a70-bc68-47e972aeb801. sources=[0] 2 sinks=1
[2018-05-30 07:01:42.583991] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 07:05:48.777982] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
-------- broker log: --------
Thread-1::INFO::2018-05-30 00:07:13,241::ping::60::ping.Ping::(action) Successfully pinged 192.168.8.1
Listener::INFO::2018-05-30 00:07:14,344::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:19,356::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-5::INFO::2018-05-30 00:07:22,081::engine_health::202::engine_health.EngineHealth::(_result_from_stats) VM not running on this host, status Down
Listener::INFO::2018-05-30 00:07:24,369::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:29,382::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:34,393::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:39,406::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:44,419::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:49,431::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:54,444::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:59,452::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-2::INFO::2018-05-30 00:08:03,736::mgmt_bridge::62::mgmt_bridge.MgmtBridge::(action) Found bridge ovirtmgmt with ports
Listener::INFO::2018-05-30 00:08:04,460::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:08:09,468::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-4::INFO::2018-05-30 00:08:11,461::cpu_load_no_engine::116::cpu_load_no_engine.CpuLoadNoEngine::(calculate_load) VM stats do not contain cpu usage. VM might be down.
Thread-4::INFO::2018-05-30 00:08:11,461::cpu_load_no_engine::126::cpu_load_no_engine.CpuLoadNoEngine::(calculate_load) System load total=0.0169, engine=0.0000, non-engine=0.0169
Listener::INFO::2018-05-30 00:08:14,481::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:08:19,490::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-5::INFO::2018-05-30 00:08:22,175::engine_health::202::engine_health.EngineHealth::(_result_from_stats) VM not running on this host, status Down
Thread-1::INFO::2018-05-30 00:08:22,436::ping::60::ping.Ping::(action) Successfully pinged 192.168.8.1
Listener::INFO::2018-05-30 00:08:24,503::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
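The broker log above is dominated by one symptom: the VDSM storage domain monitor never leaves PENDING. A quick count makes the extent of that obvious at a glance (a sketch; the grep pattern is taken from the log format above):

```shell
# Count how often the broker reported the VDSM domain monitor stuck in
# PENDING -- at roughly one report every 5 seconds, a large count means
# the hosted-engine storage domain has failed to come up for a long time.
count_pending() {
    grep -c 'VDSM domain monitor status: PENDING' "$1"
}

cat > /tmp/broker-sample.log <<'EOF'
Listener::INFO::2018-05-30 00:07:14,344::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:19,356::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-1::INFO::2018-05-30 00:08:22,436::ping::60::ping.Ping::(action) Successfully pinged 192.168.8.1
EOF
count_pending /tmp/broker-sample.log
# Prints: 2
```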
On Wed, May 30, 2018 at 12:03 AM, Sahina Bose <sabose@redhat.com> wrote:
[Adding gluster-users back]
Nothing amiss with volume info and status. Can you check the agent.log and broker.log - will be under /var/log/ovirt-hosted-engine-ha/
Also the gluster client logs - under /var/log/glusterfs/rhev-data-center-mnt-glusterSD-<volume>.log
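For anyone hunting for the right file: glusterfs names each client (mount) log after the mount point, with the leading slash dropped and the remaining slashes turned into dashes. A sketch of deriving the name (the mount path below is this cluster's engine mount; adjust for other volumes):

```shell
# Derive the glusterfs client log filename from a mount point: drop the
# leading slash, replace remaining slashes with dashes, append .log.
mount_log_name() {
    printf '%s\n' "$1" | sed 's|^/||; s|/|-|g; s|$|.log|'
}

mount_log_name /rhev/data-center/mnt/glusterSD/ovirt1.nwfiber.com:_engine
# Prints: rhev-data-center-mnt-glusterSD-ovirt1.nwfiber.com:_engine.log
# The full path would then be /var/log/glusterfs/<that name>.
```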
On Wed, May 30, 2018 at 12:08 PM, Jim Kusznir <jim@palousetech.com> wrote:
I believe the gluster data store for the engine is up and working correctly. The rest are not mounted, as the engine hasn't managed to start correctly yet. I did perform the copy-a-junk-file test: copy a file into the data store, check that it's there on another host, then delete it and see that it disappeared on the first host; it passed that test. Here's the info and status. (I have NOT performed the steps that Krutika and Ravishankar suggested yet, as I don't have my data volumes working again yet.)
[root@ovirt2 images]# gluster volume info
Volume Name: data
Type: Replicate
Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick2/data
Brick2: ovirt2.nwfiber.com:/gluster/brick2/data
Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter)
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
server.allow-insecure: on
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
Volume Name: data-hdd
Type: Replicate
Volume ID: d342a3ab-16f3-49f0-bbcf-f788be8ac5f1
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 172.172.1.11:/gluster/brick3/data-hdd
Brick2: 172.172.1.12:/gluster/brick3/data-hdd
Brick3: 172.172.1.13:/gluster/brick3/data-hdd
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
network.ping-timeout: 30
server.allow-insecure: on
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
performance.readdir-ahead: on
Volume Name: engine
Type: Replicate
Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
Volume Name: iso
Type: Replicate
Volume ID: b1ba15f5-0f0f-4411-89d0-595179f02b92
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick4/iso
Brick2: ovirt2.nwfiber.com:/gluster/brick4/iso
Brick3: ovirt3.nwfiber.com:/gluster/brick4/iso (arbiter)
Options Reconfigured:
performance.strict-o-direct: on
nfs.disable: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 6
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
[root@ovirt2 images]# gluster volume status
Status of volume: data
Gluster process                                 TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick2/data   49152     0          Y       3226
Brick ovirt2.nwfiber.com:/gluster/brick2/data   49152     0          Y       2967
Brick ovirt3.nwfiber.com:/gluster/brick2/data   49152     0          Y       2554
Self-heal Daemon on localhost                   N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y       4771

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: data-hdd
Gluster process                                 TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 172.172.1.11:/gluster/brick3/data-hdd     49153     0          Y       3232
Brick 172.172.1.12:/gluster/brick3/data-hdd     49153     0          Y       2976
Brick 172.172.1.13:/gluster/brick3/data-hdd     N/A       N/A        N       N/A
NFS Server on localhost                         N/A       N/A        N       N/A
Self-heal Daemon on localhost                   N/A       N/A        Y       4818
NFS Server on ovirt3.nwfiber.com                N/A       N/A        N       N/A
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y       17853
NFS Server on ovirt1.nwfiber.com                N/A       N/A        N       N/A
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y       4771

Task Status of Volume data-hdd
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: engine
Gluster process                                 TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick1/engine 49154     0          Y       3239
Brick ovirt2.nwfiber.com:/gluster/brick1/engine 49154     0          Y       2982
Brick ovirt3.nwfiber.com:/gluster/brick1/engine 49154     0          Y       2578
Self-heal Daemon on localhost                   N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y       4771

Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: iso
Gluster process                                 TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick4/iso    49155     0          Y       3247
Brick ovirt2.nwfiber.com:/gluster/brick4/iso    49155     0          Y       2990
Brick ovirt3.nwfiber.com:/gluster/brick4/iso    49155     0          Y       2580
Self-heal Daemon on localhost                   N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y       4771

Task Status of Volume iso
------------------------------------------------------------------------------
There are no active volume tasks
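One thing jumps out of the status output above: the 172.172.1.13 brick of data-hdd shows Online = N, which lines up with the heal backlog on that volume. A sketch for flagging offline bricks from saved `gluster volume status` output (field positions assume the compact layout used here; long brick paths that wrap across lines in live output would need rejoining first):

```shell
# Print "<volume>: <brick>" for every brick whose Online column is N.
offline_bricks() {
    awk '/^Status of volume:/ { vol = $NF }
         /^Brick / && $(NF - 1) == "N" { print vol ": " $2 }' "$1"
}

cat > /tmp/vol-status.txt <<'EOF'
Status of volume: data-hdd
Brick 172.172.1.11:/gluster/brick3/data-hdd 49153 0 Y 3232
Brick 172.172.1.12:/gluster/brick3/data-hdd 49153 0 Y 2976
Brick 172.172.1.13:/gluster/brick3/data-hdd N/A N/A N N/A
EOF
offline_bricks /tmp/vol-status.txt
# Prints: data-hdd: 172.172.1.13:/gluster/brick3/data-hdd
```

If the brick process has simply died, `gluster volume start data-hdd force` will usually respawn it, which would let the pending heals proceed.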
On Tue, May 29, 2018 at 11:30 PM, Sahina Bose <sabose@redhat.com> wrote:
On Wed, May 30, 2018 at 10:42 AM, Jim Kusznir <jim@palousetech.com> wrote:
> hosted-engine --deploy failed (it would not come up on my existing gluster storage). However, I realized no changes were written to my existing storage, so I went back to trying to get my old engine running.
>
> hosted-engine --vm-status is now taking a very long time (5+ minutes) to return, and it returns stale information everywhere. I thought perhaps the lockspace was corrupt, so I tried to clean that and the metadata, but both are failing (--clean-metadata has hung and I can't even ctrl-c out of it).
>
> How can I reinitialize all the lockspace/metadata safely? There is no engine or VMs running currently....
I think the first thing to make sure of is that your storage is up and running. Can you mount the gluster volumes and access the contents there? Please provide the gluster volume info and gluster volume status output for the volumes you're using.
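A minimal sketch of that check: try to mount and list each volume. The server and volume names are the ones from this cluster; the mount will obviously only succeed on a machine with glusterfs-fuse installed, root privileges, and network access to the gluster servers.

```shell
# Mount each gluster volume at a temp dir and try to list it, reporting
# OK or FAILED per volume.
check_volume() {
    server=$1 vol=$2
    mnt=$(mktemp -d)
    if mount -t glusterfs "$server:/$vol" "$mnt" 2>/dev/null; then
        ls "$mnt" > /dev/null && echo "$vol: OK"
        umount "$mnt"
    else
        echo "$vol: mount FAILED"
    fi
    rmdir "$mnt" 2>/dev/null
}

for v in engine data data-hdd iso; do
    check_volume ovirt1.nwfiber.com "$v"
done
```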
> --Jim > > On Tue, May 29, 2018 at 9:33 PM, Jim Kusznir <jim@palousetech.com> > wrote: > >> Well, things went from bad to very, very bad.... >> >> It appears that during one of the 2 minute lockups, the fencing >> agents decided that another node in the cluster was down. As a result, 2 >> of the 3 nodes were simultaneously reset with fencing agent reboot. After >> the nodes came back up, the engine would not start. All running VMs >> (including VMs on the 3rd node that was not rebooted) crashed. >> >> I've now been working for about 3 hours trying to get the engine to >> come up. I don't know why it won't start. hosted-engine --vm-start says >> its starting, but it doesn't start (virsh doesn't show any VMs running). >> I'm currently running --deploy, as I had run out of options for anything >> else I can come up with. I hope this will allow me to re-import all my >> existing VMs and allow me to start them back up after everything comes back >> up. >> >> I do have an unverified geo-rep backup; I don't know if it is a >> good backup (there were several prior messages to this list, but I didn't >> get replies to my questions. It was running in what I believe to be >> "strange", and the data directories are larger than their source). >> >> I'll see if my --deploy works, and if not, I'll be back with >> another message/help request. >> >> When the dust settles and I'm at least minimally functional again, >> I really want to understand why all these technologies designed to offer >> redundancy conspired to reduce uptime and create failures where there >> weren't any otherwise. I thought with hosted engine, 3 ovirt servers and >> glusterfs with minimum replica 2+arb or replica 3 should have offered >> strong resilience against server failure or disk failure, and should have >> prevented / recovered from data corruption. 
Instead, all of the above >> happened (once I get my cluster back up, I still have to try and recover my >> webserver VM, which won't boot due to XFS corrupt journal issues created >> during the gluster crashes). I think a lot of these issues were rooted >> from the upgrade from 4.1 to 4.2. >> >> --Jim >> >> On Tue, May 29, 2018 at 6:25 PM, Jim Kusznir <jim@palousetech.com> >> wrote: >> >>> I also finally found the following in my system log on one server: >>> >>> [10679.524491] INFO: task glusterclogro:14933 blocked for more >>> than 120 seconds. >>> [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >>> disables this message. >>> [10679.527144] glusterclogro D ffff97209832bf40 0 14933 >>> 1 0x00000080 >>> [10679.527150] Call Trace: >>> [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70 >>> [10679.527218] [<ffffffffc060e388>] >>> _xfs_log_force_lsn+0x2e8/0x340 [xfs] >>> [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >>> [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 >>> [xfs] >>> [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >>> [10679.527268] [<ffffffffb992076f>] ? >>> system_call_after_swapgs+0xbc/0x160 >>> [10679.527271] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 >>> [10679.527275] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >>> [10679.527279] [<ffffffffb992077b>] ? >>> system_call_after_swapgs+0xc8/0x160 >>> [10679.527283] INFO: task glusterposixfsy:14941 blocked for more >>> than 120 seconds. >>> [10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >>> disables this message. >>> [10679.529956] glusterposixfsy D ffff972495f84f10 0 14941 >>> 1 0x00000080 >>> [10679.529961] Call Trace: >>> [10679.529966] [<ffffffffb9913f79>] schedule+0x29/0x70 >>> [10679.530003] [<ffffffffc060e388>] >>> _xfs_log_force_lsn+0x2e8/0x340 [xfs] >>> [10679.530008] [<ffffffffb92cf1b0>] ? 
wake_up_state+0x20/0x20 >>> [10679.530038] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 >>> [xfs] >>> [10679.530042] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >>> [10679.530046] [<ffffffffb992076f>] ? >>> system_call_after_swapgs+0xbc/0x160 >>> [10679.530050] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 >>> [10679.530054] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >>> [10679.530058] [<ffffffffb992077b>] ? >>> system_call_after_swapgs+0xc8/0x160 >>> [10679.530062] INFO: task glusteriotwr13:15486 blocked for more >>> than 120 seconds. >>> [10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >>> disables this message. >>> [10679.533732] glusteriotwr13 D ffff9720a83f0000 0 15486 >>> 1 0x00000080 >>> [10679.533738] Call Trace: >>> [10679.533747] [<ffffffffb9913f79>] schedule+0x29/0x70 >>> [10679.533799] [<ffffffffc060e388>] >>> _xfs_log_force_lsn+0x2e8/0x340 [xfs] >>> [10679.533806] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >>> [10679.533846] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 >>> [xfs] >>> [10679.533852] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >>> [10679.533858] [<ffffffffb992076f>] ? >>> system_call_after_swapgs+0xbc/0x160 >>> [10679.533863] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 >>> [10679.533868] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >>> [10679.533873] [<ffffffffb992077b>] ? >>> system_call_after_swapgs+0xc8/0x160 >>> [10919.512757] INFO: task glusterclogro:14933 blocked for more >>> than 120 seconds. >>> [10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >>> disables this message. >>> [10919.516663] glusterclogro D ffff97209832bf40 0 14933 >>> 1 0x00000080 >>> [10919.516677] Call Trace: >>> [10919.516690] [<ffffffffb9913f79>] schedule+0x29/0x70 >>> [10919.516696] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 >>> [10919.516703] [<ffffffffb951cc04>] ? blk_finish_plug+0x14/0x40 >>> [10919.516768] [<ffffffffc05e9224>] ? 
>>> _xfs_buf_ioapply+0x334/0x460 [xfs] >>> [10919.516774] [<ffffffffb991432d>] wait_for_completion+0xfd/0x140 >>> [10919.516782] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >>> [10919.516821] [<ffffffffc05eb0a3>] ? _xfs_buf_read+0x23/0x40 >>> [xfs] >>> [10919.516859] [<ffffffffc05eafa9>] >>> xfs_buf_submit_wait+0xf9/0x1d0 [xfs] >>> [10919.516902] [<ffffffffc061b279>] ? >>> xfs_trans_read_buf_map+0x199/0x400 [xfs] >>> [10919.516940] [<ffffffffc05eb0a3>] _xfs_buf_read+0x23/0x40 [xfs] >>> [10919.516977] [<ffffffffc05eb1b9>] xfs_buf_read_map+0xf9/0x160 >>> [xfs] >>> [10919.517022] [<ffffffffc061b279>] xfs_trans_read_buf_map+0x199/0x400 >>> [xfs] >>> [10919.517057] [<ffffffffc05c8d04>] xfs_da_read_buf+0xd4/0x100 >>> [xfs] >>> [10919.517091] [<ffffffffc05c8d53>] xfs_da3_node_read+0x23/0xd0 >>> [xfs] >>> [10919.517126] [<ffffffffc05c9fee>] xfs_da3_node_lookup_int+0x6e/0x2f0 >>> [xfs] >>> [10919.517160] [<ffffffffc05d5a1d>] xfs_dir2_node_lookup+0x4d/0x170 >>> [xfs] >>> [10919.517194] [<ffffffffc05ccf5d>] xfs_dir_lookup+0x1bd/0x1e0 >>> [xfs] >>> [10919.517233] [<ffffffffc05fd8d9>] xfs_lookup+0x69/0x140 [xfs] >>> [10919.517271] [<ffffffffc05fa018>] xfs_vn_lookup+0x78/0xc0 [xfs] >>> [10919.517278] [<ffffffffb9425cf3>] lookup_real+0x23/0x60 >>> [10919.517283] [<ffffffffb9426702>] __lookup_hash+0x42/0x60 >>> [10919.517288] [<ffffffffb942d519>] SYSC_renameat2+0x3a9/0x5a0 >>> [10919.517296] [<ffffffffb94d3753>] ? >>> selinux_file_free_security+0x23/0x30 >>> [10919.517304] [<ffffffffb992077b>] ? >>> system_call_after_swapgs+0xc8/0x160 >>> [10919.517309] [<ffffffffb992076f>] ? >>> system_call_after_swapgs+0xbc/0x160 >>> [10919.517313] [<ffffffffb992077b>] ? >>> system_call_after_swapgs+0xc8/0x160 >>> [10919.517318] [<ffffffffb992076f>] ? 
>>> system_call_after_swapgs+0xbc/0x160 >>> [10919.517323] [<ffffffffb942e58e>] SyS_renameat2+0xe/0x10 >>> [10919.517328] [<ffffffffb942e5ce>] SyS_rename+0x1e/0x20 >>> [10919.517333] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >>> [10919.517339] [<ffffffffb992077b>] ? >>> system_call_after_swapgs+0xc8/0x160 >>> [11159.496095] INFO: task glusteriotwr9:15482 blocked for more >>> than 120 seconds. >>> [11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >>> disables this message. >>> [11159.498978] glusteriotwr9 D ffff971fa0fa1fa0 0 15482 >>> 1 0x00000080 >>> [11159.498984] Call Trace: >>> [11159.498995] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50 >>> [11159.498999] [<ffffffffb9913f79>] schedule+0x29/0x70 >>> [11159.499003] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 >>> [11159.499056] [<ffffffffc05dd9b7>] ? >>> xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs] >>> [11159.499082] [<ffffffffc05dd43e>] ? >>> xfs_iext_bno_to_irec+0x8e/0xd0 [xfs] >>> [11159.499090] [<ffffffffb92f7a12>] ? ktime_get_ts64+0x52/0xf0 >>> [11159.499093] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50 >>> [11159.499097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 >>> [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20 >>> [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50 >>> [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0 >>> [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90 >>> [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40 >>> [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70 >>> [11159.499125] [<ffffffffb9394e85>] grab_cache_page_write_begin+0x >>> 55/0xc0 >>> [11159.499130] [<ffffffffb9484b76>] iomap_write_begin+0x66/0x100 >>> [11159.499135] [<ffffffffb9484edf>] iomap_write_actor+0xcf/0x1d0 >>> [11159.499140] [<ffffffffb9484e10>] ? 
iomap_write_end+0x80/0x80 >>> [11159.499144] [<ffffffffb94854e7>] iomap_apply+0xb7/0x150 >>> [11159.499149] [<ffffffffb9485621>] iomap_file_buffered_write+0xa1 >>> /0xe0 >>> [11159.499153] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 >>> [11159.499182] [<ffffffffc05f025d>] xfs_file_buffered_aio_write+0x12d/0x2c0 >>> [xfs] >>> [11159.499213] [<ffffffffc05f057d>] >>> xfs_file_aio_write+0x18d/0x1b0 [xfs] >>> [11159.499217] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 >>> [11159.499222] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 >>> [11159.499225] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 >>> [11159.499230] [<ffffffffb992076f>] ? >>> system_call_after_swapgs+0xbc/0x160 >>> [11159.499234] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >>> [11159.499238] [<ffffffffb992077b>] ? >>> system_call_after_swapgs+0xc8/0x160 >>> [11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than >>> 120 seconds. >>> [11279.490197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >>> disables this message. >>> [11279.491665] xfsaild/dm-10 D ffff9720a8660fd0 0 1134 >>> 2 0x00000000 >>> [11279.491671] Call Trace: >>> [11279.491682] [<ffffffffb92a3a2e>] ? >>> try_to_del_timer_sync+0x5e/0x90 >>> [11279.491688] [<ffffffffb9913f79>] schedule+0x29/0x70 >>> [11279.491744] [<ffffffffc060de36>] _xfs_log_force+0x1c6/0x2c0 >>> [xfs] >>> [11279.491750] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >>> [11279.491783] [<ffffffffc0619fec>] ? xfsaild+0x16c/0x6f0 [xfs] >>> [11279.491817] [<ffffffffc060df5c>] xfs_log_force+0x2c/0x70 [xfs] >>> [11279.491849] [<ffffffffc0619e80>] ? >>> xfs_trans_ail_cursor_first+0x90/0x90 [xfs] >>> [11279.491880] [<ffffffffc0619fec>] xfsaild+0x16c/0x6f0 [xfs] >>> [11279.491913] [<ffffffffc0619e80>] ? >>> xfs_trans_ail_cursor_first+0x90/0x90 [xfs] >>> [11279.491919] [<ffffffffb92bb161>] kthread+0xd1/0xe0 >>> [11279.491926] [<ffffffffb92bb090>] ? 
>>> insert_kthread_work+0x40/0x40 >>> [11279.491932] [<ffffffffb9920677>] ret_from_fork_nospec_begin+0x2 >>> 1/0x21 >>> [11279.491936] [<ffffffffb92bb090>] ? >>> insert_kthread_work+0x40/0x40 >>> [11279.491976] INFO: task glusterclogfsyn:14934 blocked for more >>> than 120 seconds. >>> [11279.493466] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >>> disables this message. >>> [11279.494952] glusterclogfsyn D ffff97209832af70 0 14934 >>> 1 0x00000080 >>> [11279.494957] Call Trace: >>> [11279.494979] [<ffffffffc0309839>] ? >>> __split_and_process_bio+0x2e9/0x520 [dm_mod] >>> [11279.494983] [<ffffffffb9913f79>] schedule+0x29/0x70 >>> [11279.494987] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 >>> [11279.494997] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 >>> [dm_mod] >>> [11279.495001] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 >>> [11279.495005] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x >>> 140 >>> [11279.495010] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >>> [11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 >>> [11279.495049] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 >>> [xfs] >>> [11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 >>> [xfs] >>> [11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >>> [11279.495090] [<ffffffffb992076f>] ? >>> system_call_after_swapgs+0xbc/0x160 >>> [11279.495094] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 >>> [11279.495098] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >>> [11279.495102] [<ffffffffb992077b>] ? >>> system_call_after_swapgs+0xc8/0x160 >>> [11279.495105] INFO: task glusterposixfsy:14941 blocked for more >>> than 120 seconds. >>> [11279.496606] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >>> disables this message. >>> [11279.498114] glusterposixfsy D ffff972495f84f10 0 14941 >>> 1 0x00000080 >>> [11279.498118] Call Trace: >>> [11279.498134] [<ffffffffc0309839>] ? 
>>> __split_and_process_bio+0x2e9/0x520 [dm_mod] >>> [11279.498138] [<ffffffffb9913f79>] schedule+0x29/0x70 >>> [11279.498142] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 >>> [11279.498152] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 >>> [dm_mod] >>> [11279.498156] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 >>> [11279.498160] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x >>> 140 >>> [11279.498165] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >>> [11279.498169] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 >>> [11279.498202] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 >>> [xfs] >>> [11279.498231] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 >>> [xfs] >>> [11279.498238] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >>> [11279.498242] [<ffffffffb992076f>] ? >>> system_call_after_swapgs+0xbc/0x160 >>> [11279.498246] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 >>> [11279.498250] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >>> [11279.498254] [<ffffffffb992077b>] ? >>> system_call_after_swapgs+0xc8/0x160 >>> [11279.498257] INFO: task glusteriotwr1:14950 blocked for more >>> than 120 seconds. >>> [11279.499789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >>> disables this message. >>> [11279.501343] glusteriotwr1 D ffff97208b6daf70 0 14950 >>> 1 0x00000080 >>> [11279.501348] Call Trace: >>> [11279.501353] [<ffffffffb9913f79>] schedule+0x29/0x70 >>> [11279.501390] [<ffffffffc060e388>] >>> _xfs_log_force_lsn+0x2e8/0x340 [xfs] >>> [11279.501396] [<ffffffffb92cf1b0>] ? 
wake_up_state+0x20/0x20 >>> [11279.501428] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 >>> [xfs] >>> [11279.501432] [<ffffffffb944ef3f>] generic_write_sync+0x4f/0x70 >>> [11279.501461] [<ffffffffc05f0545>] >>> xfs_file_aio_write+0x155/0x1b0 [xfs] >>> [11279.501466] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 >>> [11279.501471] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 >>> [11279.501475] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 >>> [11279.501479] [<ffffffffb992076f>] ? >>> system_call_after_swapgs+0xbc/0x160 >>> [11279.501483] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >>> [11279.501489] [<ffffffffb992077b>] ? >>> system_call_after_swapgs+0xc8/0x160 >>> [11279.501493] INFO: task glusteriotwr4:14953 blocked for more >>> than 120 seconds. >>> [11279.503047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >>> disables this message. >>> [11279.504630] glusteriotwr4 D ffff972499f2bf40 0 14953 >>> 1 0x00000080 >>> [11279.504635] Call Trace: >>> [11279.504640] [<ffffffffb9913f79>] schedule+0x29/0x70 >>> [11279.504676] [<ffffffffc060e388>] >>> _xfs_log_force_lsn+0x2e8/0x340 [xfs] >>> [11279.504681] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >>> [11279.504710] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 >>> [xfs] >>> [11279.504714] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >>> [11279.504718] [<ffffffffb992076f>] ? >>> system_call_after_swapgs+0xbc/0x160 >>> [11279.504722] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 >>> [11279.504725] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 >>> [11279.504730] [<ffffffffb992077b>] ? >>> system_call_after_swapgs+0xc8/0x160 >>> [12127.466494] perf: interrupt took too long (8263 > 8150), >>> lowering kernel.perf_event_max_sample_rate to 24000 >>> >>> -------------------- >>> I think this is the cause of the massive ovirt performance issues >>> irrespective of gluster volume. At the time this happened, I was also >>> ssh'ed into the host, and was doing some rpm querry commands. 
I had just >>> run rpm -qa |grep glusterfs (to verify what version was actually >>> installed), and that command took almost 2 minutes to return! Normally it >>> takes less than 2 seconds. That is all pure local SSD IO, too.... >>> >>> I'm no expert, but its my understanding that anytime a software >>> causes these kinds of issues, its a serious bug in the software, even if >>> its mis-handled exceptions. Is this correct? >>> >>> --Jim >>> >>> On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir <jim@palousetech.com> >>> wrote: >>> >>>> I think this is the profile information for one of the volumes >>>> that lives on the SSDs and is fully operational with no down/problem disks: >>>> >>>> [root@ovirt2 yum.repos.d]# gluster volume profile data info >>>> Brick: ovirt2.nwfiber.com:/gluster/brick2/data >>>> ---------------------------------------------- >>>> Cumulative Stats: >>>> Block Size: 256b+ 512b+ >>>> 1024b+ >>>> No. of Reads: 983 2696 >>>> 1059 >>>> No. of Writes: 0 1113 >>>> 302 >>>> >>>> Block Size: 2048b+ 4096b+ >>>> 8192b+ >>>> No. of Reads: 852 88608 >>>> 53526 >>>> No. of Writes: 522 812340 >>>> 76257 >>>> >>>> Block Size: 16384b+ 32768b+ >>>> 65536b+ >>>> No. of Reads: 54351 241901 >>>> 15024 >>>> No. of Writes: 21636 8656 >>>> 8976 >>>> >>>> Block Size: 131072b+ >>>> No. of Reads: 524156 >>>> No. of Writes: 296071 >>>> %-latency Avg-latency Min-Latency Max-Latency No. 
of >>>> calls Fop >>>> --------- ----------- ----------- ----------- >>>> ------------ ---- >>>> 0.00 0.00 us 0.00 us 0.00 us >>>> 4189 RELEASE >>>> 0.00 0.00 us 0.00 us 0.00 us >>>> 1257 RELEASEDIR >>>> 0.00 46.19 us 12.00 us 187.00 us >>>> 69 FLUSH >>>> 0.00 147.00 us 78.00 us 367.00 us >>>> 86 REMOVEXATTR >>>> 0.00 223.46 us 24.00 us 1166.00 us >>>> 149 READDIR >>>> 0.00 565.34 us 76.00 us 3639.00 us >>>> 88 FTRUNCATE >>>> 0.00 263.28 us 20.00 us 28385.00 us >>>> 228 LK >>>> 0.00 98.84 us 2.00 us 880.00 us >>>> 1198 OPENDIR >>>> 0.00 91.59 us 26.00 us 10371.00 us >>>> 3853 STATFS >>>> 0.00 494.14 us 17.00 us 193439.00 us >>>> 1171 GETXATTR >>>> 0.00 299.42 us 35.00 us 9799.00 us >>>> 2044 READDIRP >>>> 0.00 1965.31 us 110.00 us 382258.00 us >>>> 321 XATTROP >>>> 0.01 113.40 us 24.00 us 61061.00 us >>>> 8134 STAT >>>> 0.01 755.38 us 57.00 us 607603.00 us >>>> 3196 DISCARD >>>> 0.05 2690.09 us 58.00 us 2704761.00 us >>>> 3206 OPEN >>>> 0.10 119978.25 us 97.00 us 9406684.00 us >>>> 154 SETATTR >>>> 0.18 101.73 us 28.00 us 700477.00 us >>>> 313379 FSTAT >>>> 0.23 1059.84 us 25.00 us 2716124.00 us >>>> 38255 LOOKUP >>>> 0.47 1024.11 us 54.00 us 6197164.00 us >>>> 81455 FXATTROP >>>> 1.72 2984.00 us 15.00 us 37098954.00 us >>>> 103020 FINODELK >>>> 5.92 44315.32 us 51.00 us 24731536.00 us >>>> 23957 FSYNC >>>> 13.27 2399.78 us 25.00 us 22089540.00 us >>>> 991005 READ >>>> 37.00 5980.43 us 52.00 us 22099889.00 us >>>> 1108976 WRITE >>>> 41.04 5452.75 us 13.00 us 22102452.00 us >>>> 1349053 INODELK >>>> >>>> Duration: 10026 seconds >>>> Data Read: 80046027759 bytes >>>> Data Written: 44496632320 bytes >>>> >>>> Interval 1 Stats: >>>> Block Size: 256b+ 512b+ >>>> 1024b+ >>>> No. of Reads: 983 2696 >>>> 1059 >>>> No. of Writes: 0 838 >>>> 185 >>>> >>>> Block Size: 2048b+ 4096b+ >>>> 8192b+ >>>> No. of Reads: 852 85856 >>>> 51575 >>>> No. of Writes: 382 705802 >>>> 57812 >>>> >>>> Block Size: 16384b+ 32768b+ >>>> 65536b+ >>>> No. of Reads: 52673 232093 >>>> 14984 >>>> No. 
of Writes: 13499 4908 >>>> 4242 >>>> >>>> Block Size: 131072b+ >>>> No. of Reads: 460040 >>>> No. of Writes: 6411 >>>> %-latency Avg-latency Min-Latency Max-Latency No. of >>>> calls Fop >>>> --------- ----------- ----------- ----------- >>>> ------------ ---- >>>> 0.00 0.00 us 0.00 us 0.00 us >>>> 2093 RELEASE >>>> 0.00 0.00 us 0.00 us 0.00 us >>>> 1093 RELEASEDIR >>>> 0.00 53.38 us 26.00 us 111.00 us >>>> 16 FLUSH >>>> 0.00 145.14 us 78.00 us 367.00 us >>>> 71 REMOVEXATTR >>>> 0.00 190.96 us 114.00 us 298.00 us >>>> 71 SETATTR >>>> 0.00 213.38 us 24.00 us 1145.00 us >>>> 90 READDIR >>>> 0.00 263.28 us 20.00 us 28385.00 us >>>> 228 LK >>>> 0.00 101.76 us 2.00 us 880.00 us >>>> 1093 OPENDIR >>>> 0.01 93.60 us 27.00 us 10371.00 us >>>> 3090 STATFS >>>> 0.02 537.47 us 17.00 us 193439.00 us >>>> 1038 GETXATTR >>>> 0.03 297.44 us 35.00 us 9799.00 us >>>> 1990 READDIRP >>>> 0.03 2357.28 us 110.00 us 382258.00 us >>>> 253 XATTROP >>>> 0.04 385.93 us 58.00 us 47593.00 us >>>> 2091 OPEN >>>> 0.04 114.86 us 24.00 us 61061.00 us >>>> 7715 STAT >>>> 0.06 444.59 us 57.00 us 333240.00 us >>>> 3053 DISCARD >>>> 0.42 316.24 us 25.00 us 290728.00 us >>>> 29823 LOOKUP >>>> 0.73 257.92 us 54.00 us 344812.00 us >>>> 63296 FXATTROP >>>> 1.37 98.30 us 28.00 us 67621.00 us >>>> 313172 FSTAT >>>> 1.58 2124.69 us 51.00 us 849200.00 us >>>> 16717 FSYNC >>>> 5.73 162.46 us 52.00 us 748492.00 us >>>> 794079 WRITE >>>> 7.19 2065.17 us 16.00 us 37098954.00 us >>>> 78381 FINODELK >>>> 36.44 886.32 us 25.00 us 2216436.00 us >>>> 925421 READ >>>> 46.30 1178.04 us 13.00 us 1700704.00 us >>>> 884635 INODELK >>>> >>>> Duration: 7485 seconds >>>> Data Read: 71250527215 bytes >>>> Data Written: 5119903744 bytes >>>> >>>> Brick: ovirt3.nwfiber.com:/gluster/brick2/data >>>> ---------------------------------------------- >>>> Cumulative Stats: >>>> Block Size: 1b+ >>>> No. of Reads: 0 >>>> No. of Writes: 3264419 >>>> %-latency Avg-latency Min-Latency Max-Latency No. 
of >>>> calls Fop >>>> --------- ----------- ----------- ----------- >>>> ------------ ---- >>>> 0.00 0.00 us 0.00 us 0.00 us >>>> 90 FORGET >>>> 0.00 0.00 us 0.00 us 0.00 us >>>> 9462 RELEASE >>>> 0.00 0.00 us 0.00 us 0.00 us >>>> 4254 RELEASEDIR >>>> 0.00 50.52 us 13.00 us 190.00 us >>>> 71 FLUSH >>>> 0.00 186.97 us 87.00 us 713.00 us >>>> 86 REMOVEXATTR >>>> 0.00 79.32 us 33.00 us 189.00 us >>>> 228 LK >>>> 0.00 220.98 us 129.00 us 513.00 us >>>> 86 SETATTR >>>> 0.01 259.30 us 26.00 us 2632.00 us >>>> 137 READDIR >>>> 0.02 322.76 us 145.00 us 2125.00 us >>>> 321 XATTROP >>>> 0.03 109.55 us 2.00 us 1258.00 us >>>> 1193 OPENDIR >>>> 0.05 70.21 us 21.00 us 431.00 us >>>> 3196 DISCARD >>>> 0.05 169.26 us 21.00 us 2315.00 us >>>> 1545 GETXATTR >>>> 0.12 176.85 us 63.00 us 2844.00 us >>>> 3206 OPEN >>>> 0.61 303.49 us 90.00 us 3085.00 us >>>> 9633 FSTAT >>>> 2.44 305.66 us 28.00 us 3716.00 us >>>> 38230 LOOKUP >>>> 4.52 266.22 us 55.00 us 53424.00 us >>>> 81455 FXATTROP >>>> 6.96 1397.99 us 51.00 us 64822.00 us >>>> 23889 FSYNC >>>> 16.48 84.74 us 25.00 us 6917.00 us >>>> 932592 WRITE >>>> 30.16 106.90 us 13.00 us 3920189.00 us >>>> 1353046 INODELK >>>> 38.55 1794.52 us 14.00 us 16210553.00 us >>>> 103039 FINODELK >>>> >>>> Duration: 66562 seconds >>>> Data Read: 0 bytes >>>> Data Written: 3264419 bytes >>>> >>>> Interval 1 Stats: >>>> Block Size: 1b+ >>>> No. of Reads: 0 >>>> No. of Writes: 794080 >>>> %-latency Avg-latency Min-Latency Max-Latency No. 
of >>>> calls Fop >>>> --------- ----------- ----------- ----------- >>>> ------------ ---- >>>> 0.00 0.00 us 0.00 us 0.00 us >>>> 2093 RELEASE >>>> 0.00 0.00 us 0.00 us 0.00 us >>>> 1093 RELEASEDIR >>>> 0.00 70.31 us 26.00 us 125.00 us >>>> 16 FLUSH >>>> 0.00 193.10 us 103.00 us 713.00 us >>>> 71 REMOVEXATTR >>>> 0.01 227.32 us 133.00 us 513.00 us >>>> 71 SETATTR >>>> 0.01 79.32 us 33.00 us 189.00 us >>>> 228 LK >>>> 0.01 259.83 us 35.00 us 1138.00 us >>>> 89 READDIR >>>> 0.03 318.26 us 145.00 us 2047.00 us >>>> 253 XATTROP >>>> 0.04 112.67 us 3.00 us 1258.00 us >>>> 1093 OPENDIR >>>> 0.06 167.98 us 23.00 us 1951.00 us >>>> 1014 GETXATTR >>>> 0.08 70.97 us 22.00 us 431.00 us >>>> 3053 DISCARD >>>> 0.13 183.78 us 66.00 us 2844.00 us >>>> 2091 OPEN >>>> 1.01 303.82 us 90.00 us 3085.00 us >>>> 9610 FSTAT >>>> 3.27 316.59 us 30.00 us 3716.00 us >>>> 29820 LOOKUP >>>> 5.83 265.79 us 59.00 us 53424.00 us >>>> 63296 FXATTROP >>>> 7.95 1373.89 us 51.00 us 64822.00 us >>>> 16717 FSYNC >>>> 23.17 851.99 us 14.00 us 16210553.00 us >>>> 78555 FINODELK >>>> 24.04 87.44 us 27.00 us 6917.00 us >>>> 794081 WRITE >>>> 34.36 111.91 us 14.00 us 984871.00 us >>>> 886790 INODELK >>>> >>>> Duration: 7485 seconds >>>> Data Read: 0 bytes >>>> Data Written: 794080 bytes >>>> >>>> >>>> ----------------------- >>>> Here is the data from the volume that is backed by the SHDDs and >>>> has one failed disk: >>>> [root@ovirt2 yum.repos.d]# gluster volume profile data-hdd info >>>> Brick: 172.172.1.12:/gluster/brick3/data-hdd >>>> -------------------------------------------- >>>> Cumulative Stats: >>>> Block Size: 256b+ 512b+ >>>> 1024b+ >>>> No. of Reads: 1702 86 >>>> 16 >>>> No. of Writes: 0 767 >>>> 71 >>>> >>>> Block Size: 2048b+ 4096b+ >>>> 8192b+ >>>> No. of Reads: 19 51841 >>>> 2049 >>>> No. of Writes: 76 60668 >>>> 35727 >>>> >>>> Block Size: 16384b+ 32768b+ >>>> 65536b+ >>>> No. of Reads: 1744 639 >>>> 1088 >>>> No. of Writes: 8524 2410 >>>> 1285 >>>> >>>> Block Size: 131072b+ >>>> No. 
of Reads: 771999 >>>> No. of Writes: 29584 >>>> %-latency Avg-latency Min-Latency Max-Latency No. of >>>> calls Fop >>>> --------- ----------- ----------- ----------- >>>> ------------ ---- >>>> 0.00 0.00 us 0.00 us 0.00 us >>>> 2902 RELEASE >>>> 0.00 0.00 us 0.00 us 0.00 us >>>> 1517 RELEASEDIR >>>> 0.00 197.00 us 197.00 us 197.00 us >>>> 1 FTRUNCATE >>>> 0.00 70.24 us 16.00 us 758.00 us >>>> 51 FLUSH >>>> 0.00 143.93 us 82.00 us 305.00 us >>>> 57 REMOVEXATTR >>>> 0.00 178.63 us 105.00 us 712.00 us >>>> 60 SETATTR >>>> 0.00 67.30 us 19.00 us 572.00 us >>>> 555 LK >>>> 0.00 322.80 us 23.00 us 4673.00 us >>>> 138 READDIR >>>> 0.00 336.56 us 106.00 us 11994.00 us >>>> 237 XATTROP >>>> 0.00 84.70 us 28.00 us 1071.00 us >>>> 3469 STATFS >>>> 0.01 387.75 us 2.00 us 146017.00 us >>>> 1467 OPENDIR >>>> 0.01 148.59 us 21.00 us 64374.00 us >>>> 4454 STAT >>>> 0.02 783.02 us 16.00 us 93502.00 us >>>> 1902 GETXATTR >>>> 0.03 1516.10 us 17.00 us 210690.00 us >>>> 1364 ENTRYLK >>>> 0.03 2555.47 us 300.00 us 674454.00 us >>>> 1064 READDIRP >>>> 0.07 85.74 us 19.00 us 68340.00 us >>>> 62849 FSTAT >>>> 0.07 1978.12 us 59.00 us 202596.00 us >>>> 2729 OPEN >>>> 0.22 708.57 us 15.00 us 394799.00 us >>>> 25447 LOOKUP >>>> 5.94 2331.74 us 15.00 us 1099530.00 us >>>> 207534 FINODELK >>>> 7.31 8311.75 us 58.00 us 1800216.00 us >>>> 71668 FXATTROP >>>> 12.49 7735.19 us 51.00 us 3595513.00 us >>>> 131642 WRITE >>>> 17.70 957.08 us 16.00 us 13700466.00 us >>>> 1508160 INODELK >>>> 24.55 2546.43 us 26.00 us 5077347.00 us >>>> 786060 READ >>>> 31.56 49699.15 us 47.00 us 3746331.00 us >>>> 51777 FSYNC >>>> >>>> Duration: 10101 seconds >>>> Data Read: 101562897361 bytes >>>> Data Written: 4834450432 bytes >>>> >>>> Interval 0 Stats: >>>> Block Size: 256b+ 512b+ >>>> 1024b+ >>>> No. of Reads: 1702 86 >>>> 16 >>>> No. of Writes: 0 767 >>>> 71 >>>> >>>> Block Size: 2048b+ 4096b+ >>>> 8192b+ >>>> No. of Reads: 19 51841 >>>> 2049 >>>> No. 
of Writes: 76 60668 >>>> 35727 >>>> >>>> Block Size: 16384b+ 32768b+ >>>> 65536b+ >>>> No. of Reads: 1744 639 >>>> 1088 >>>> No. of Writes: 8524 2410 >>>> 1285 >>>> >>>> Block Size: 131072b+ >>>> No. of Reads: 771999 >>>> No. of Writes: 29584 >>>> %-latency Avg-latency Min-Latency Max-Latency No. of >>>> calls Fop >>>> --------- ----------- ----------- ----------- >>>> ------------ ---- >>>> 0.00 0.00 us 0.00 us 0.00 us >>>> 2902 RELEASE >>>> 0.00 0.00 us 0.00 us 0.00 us >>>> 1517 RELEASEDIR >>>> 0.00 197.00 us 197.00 us 197.00 us >>>> 1 FTRUNCATE >>>> 0.00 70.24 us 16.00 us 758.00 us >>>> 51 FLUSH >>>> 0.00 143.93 us 82.00 us 305.00 us >>>> 57 REMOVEXATTR >>>> 0.00 178.63 us 105.00 us 712.00 us >>>> 60 SETATTR >>>> 0.00 67.30 us 19.00 us 572.00 us >>>> 555 LK >>>> 0.00 322.80 us 23.00 us 4673.00 us >>>> 138 READDIR >>>> 0.00 336.56 us 106.00 us 11994.00 us >>>> 237 XATTROP >>>> 0.00 84.70 us 28.00 us 1071.00 us >>>> 3469 STATFS >>>> 0.01 387.75 us 2.00 us 146017.00 us >>>> 1467 OPENDIR >>>> 0.01 148.59 us 21.00 us 64374.00 us >>>> 4454 STAT >>>> 0.02 783.02 us 16.00 us 93502.00 us >>>> 1902 GETXATTR >>>> 0.03 1516.10 us 17.00 us 210690.00 us >>>> 1364 ENTRYLK >>>> 0.03 2555.47 us 300.00 us 674454.00 us >>>> 1064 READDIRP >>>> 0.07 85.73 us 19.00 us 68340.00 us >>>> 62849 FSTAT >>>> 0.07 1978.12 us 59.00 us 202596.00 us >>>> 2729 OPEN >>>> 0.22 708.57 us 15.00 us 394799.00 us >>>> 25447 LOOKUP >>>> 5.94 2334.57 us 15.00 us 1099530.00 us >>>> 207534 FINODELK >>>> 7.31 8311.49 us 58.00 us 1800216.00 us >>>> 71668 FXATTROP >>>> 12.49 7735.32 us 51.00 us 3595513.00 us >>>> 131642 WRITE >>>> 17.71 957.08 us 16.00 us 13700466.00 us >>>> 1508160 INODELK >>>> 24.56 2546.42 us 26.00 us 5077347.00 us >>>> 786060 READ >>>> 31.54 49651.63 us 47.00 us 3746331.00 us >>>> 51777 FSYNC >>>> >>>> Duration: 10101 seconds >>>> Data Read: 101562897361 bytes >>>> Data Written: 4834450432 bytes >>>> >>>> >>>> On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir <jim@palousetech.com >>>> > wrote: 
>>>> >>>>> Thank you for your response. >>>>> >>>>> I have 4 gluster volumes. 3 are replica 2 + arbitrator. >>>>> replica bricks are on ovirt1 and ovirt2, arbitrator on ovirt3. The 4th >>>>> volume is replica 3, with a brick on all three ovirt machines. >>>>> >>>>> The first 3 volumes are on an SSD disk; the 4th is on a Seagate >>>>> SSHD (same in all three machines). On ovirt3, the SSHD has reported hard >>>>> IO failures, and that brick is offline. However, the other two replicas >>>>> are fully operational (although they still show contents in the heal info >>>>> command that won't go away, but that may be the case until I replace the >>>>> failed disk). >>>>> >>>>> What is bothering me is that ALL 4 gluster volumes are showing >>>>> horrible performance issues. At this point, as the bad disk has been >>>>> completely offlined, I would expect gluster to perform at normal speed, but >>>>> that is definitely not the case. >>>>> >>>>> I've also noticed that the performance hits seem to come in >>>>> waves: things seem to work acceptably (but slow) for a while, then >>>>> suddenly, its as if all disk IO on all volumes (including non-gluster local >>>>> OS disk volumes for the hosts) pause for about 30 seconds, then IO resumes >>>>> again. During those times, I start getting VM not responding and host not >>>>> responding notices as well as the applications having major issues. >>>>> >>>>> I've shut down most of my VMs and am down to just my essential >>>>> core VMs (shedded about 75% of my VMs). I still am experiencing the same >>>>> issues. >>>>> >>>>> Am I correct in believing that once the failed disk was brought >>>>> offline that performance should return to normal? >>>>> >>>>> On Tue, May 29, 2018 at 1:27 PM, Alex K <rightkicktech@gmail.com >>>>> > wrote: >>>>> >>>>>> I would check disks status and accessibility of mount points >>>>>> where your gluster volumes reside. 
>>>>>> >>>>>> On Tue, May 29, 2018, 22:28 Jim Kusznir <jim@palousetech.com> >>>>>> wrote: >>>>>> >>>>>>> On one ovirt server, I'm now seeing these messages: >>>>>>> [56474.239725] blk_update_request: 63 callbacks suppressed >>>>>>> [56474.239732] blk_update_request: I/O error, dev dm-2, sector >>>>>>> 0 >>>>>>> [56474.240602] blk_update_request: I/O error, dev dm-2, sector >>>>>>> 3905945472 >>>>>>> [56474.241346] blk_update_request: I/O error, dev dm-2, sector >>>>>>> 3905945584 >>>>>>> [56474.242236] blk_update_request: I/O error, dev dm-2, sector >>>>>>> 2048 >>>>>>> [56474.243072] blk_update_request: I/O error, dev dm-2, sector >>>>>>> 3905943424 >>>>>>> [56474.243997] blk_update_request: I/O error, dev dm-2, sector >>>>>>> 3905943536 >>>>>>> [56474.247347] blk_update_request: I/O error, dev dm-2, sector >>>>>>> 0 >>>>>>> [56474.248315] blk_update_request: I/O error, dev dm-2, sector >>>>>>> 3905945472 >>>>>>> [56474.249231] blk_update_request: I/O error, dev dm-2, sector >>>>>>> 3905945584 >>>>>>> [56474.250221] blk_update_request: I/O error, dev dm-2, sector >>>>>>> 2048 >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir < >>>>>>> jim@palousetech.com> wrote: >>>>>>> >>>>>>>> I see in messages on ovirt3 (my 3rd machine, the one upgraded >>>>>>>> to 4.2): >>>>>>>> >>>>>>>> May 29 11:54:41 ovirt3 ovs-vsctl: >>>>>>>> ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>>>>>> database connection failed (No such file or directory) >>>>>>>> May 29 11:54:51 ovirt3 ovs-vsctl: >>>>>>>> ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>>>>>> database connection failed (No such file or directory) >>>>>>>> May 29 11:55:01 ovirt3 ovs-vsctl: >>>>>>>> ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>>>>>> database connection failed (No such file or directory) >>>>>>>> (appears a lot). 
>>>>>>>> >>>>>>>> I also found on the ssh session of that, some sysv warnings >>>>>>>> about the backing disk for one of the gluster volumes (straight replica >>>>>>>> 3). The glusterfs process for that disk on that machine went offline. Its >>>>>>>> my understanding that it should continue to work with the other two >>>>>>>> machines while I attempt to replace that disk, right? Attempted writes >>>>>>>> (touching an empty file) can take 15 seconds, repeating it later will be >>>>>>>> much faster. >>>>>>>> >>>>>>>> Gluster generates a bunch of different log files, I don't >>>>>>>> know what ones you want, or from which machine(s). >>>>>>>> >>>>>>>> How do I do "volume profiling"? >>>>>>>> >>>>>>>> Thanks! >>>>>>>> >>>>>>>> On Tue, May 29, 2018 at 11:53 AM, Sahina Bose < >>>>>>>> sabose@redhat.com> wrote: >>>>>>>> >>>>>>>>> Do you see errors reported in the mount logs for the volume? >>>>>>>>> If so, could you attach the logs? >>>>>>>>> Any issues with your underlying disks. Can you also attach >>>>>>>>> output of volume profiling? >>>>>>>>> >>>>>>>>> On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir < >>>>>>>>> jim@palousetech.com> wrote: >>>>>>>>> >>>>>>>>>> Ok, things have gotten MUCH worse this morning. I'm >>>>>>>>>> getting random errors from VMs, right now, about a third of my VMs have >>>>>>>>>> been paused due to storage issues, and most of the remaining VMs are not >>>>>>>>>> performing well. >>>>>>>>>> >>>>>>>>>> At this point, I am in full EMERGENCY mode, as my >>>>>>>>>> production services are now impacted, and I'm getting calls coming in with >>>>>>>>>> problems... >>>>>>>>>> >>>>>>>>>> I'd greatly appreciate help...VMs are running VERY slowly >>>>>>>>>> (when they run), and they are steadily getting worse. I don't know why. 
I >>>>>>>>>> was seeing CPU peaks (to 100%) on several VMs, in perfect sync, for a few >>>>>>>>>> minutes at a time (while the VM became unresponsive and any VMs I was >>>>>>>>>> logged into that were linux were giving me the CPU stuck messages in my >>>>>>>>>> origional post). Is all this storage related? >>>>>>>>>> >>>>>>>>>> I also have two different gluster volumes for VM storage, >>>>>>>>>> and only one had the issues, but now VMs in both are being affected at the >>>>>>>>>> same time and same way. >>>>>>>>>> >>>>>>>>>> --Jim >>>>>>>>>> >>>>>>>>>> On Mon, May 28, 2018 at 10:50 PM, Sahina Bose < >>>>>>>>>> sabose@redhat.com> wrote: >>>>>>>>>> >>>>>>>>>>> [Adding gluster-users to look at the heal issue] >>>>>>>>>>> >>>>>>>>>>> On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir < >>>>>>>>>>> jim@palousetech.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> Hello: >>>>>>>>>>>> >>>>>>>>>>>> I've been having some cluster and gluster performance >>>>>>>>>>>> issues lately. I also found that my cluster was out of date, and was >>>>>>>>>>>> trying to apply updates (hoping to fix some of these), and discovered the >>>>>>>>>>>> ovirt 4.1 repos were taken completely offline. So, I was forced to begin >>>>>>>>>>>> an upgrade to 4.2. According to docs I found/read, I needed only add the >>>>>>>>>>>> new repo, do a yum update, reboot, and be good on my hosts (did the yum >>>>>>>>>>>> update, the engine-setup on my hosted engine). Things seemed to work >>>>>>>>>>>> relatively well, except for a gluster sync issue that showed up. >>>>>>>>>>>> >>>>>>>>>>>> My cluster is a 3 node hyperconverged cluster. I >>>>>>>>>>>> upgraded the hosted engine first, then engine 3. When engine 3 came back >>>>>>>>>>>> up, for some reason one of my gluster volumes would not sync. 
Here's >>>>>>>>>>>> sample output: >>>>>>>>>>>> >>>>>>>>>>>> [root@ovirt3 ~]# gluster volume heal data-hdd info >>>>>>>>>>>> Brick 172.172.1.11:/gluster/brick3/data-hdd >>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>>> c3ae46c/images/48d7ecb8-7ac5-4 >>>>>>>>>>>> 725-bca5-b3519681cf2f/0d6080b0 >>>>>>>>>>>> -7018-4fa3-bb82-1dd9ef07d9b9 >>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>>> c3ae46c/images/647be733-f153-4 >>>>>>>>>>>> cdc-85bd-ba72544c2631/b453a300 >>>>>>>>>>>> -0602-4be1-8310-8bd5abe00971 >>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>>> c3ae46c/images/6da854d1-b6be-4 >>>>>>>>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f >>>>>>>>>>>> -b7fa-4aa2-b445-6904e31839ba >>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>>> c3ae46c/images/7f647567-d18c-4 >>>>>>>>>>>> 4f1-a58e-9b8865833acb/f9364470 >>>>>>>>>>>> -9770-4bb1-a6b9-a54861849625 >>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>>> c3ae46c/images/f3c8e7aa-6ef2-4 >>>>>>>>>>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad >>>>>>>>>>>> -2606-44ef-9cd3-ae59610a504b >>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>>> c3ae46c/images/b1ea3f62-0f05-4 >>>>>>>>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a >>>>>>>>>>>> -499f-431d-9013-5453db93ed32 >>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>>> c3ae46c/images/8c8b5147-e9d6-4 >>>>>>>>>>>> 810-b45b-185e3ed65727/16f08231 >>>>>>>>>>>> -93b0-489d-a2fd-687b6bf88eaa >>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>>> c3ae46c/images/12924435-b9c2-4 >>>>>>>>>>>> aab-ba19-1c1bc31310ef/07b3db69 >>>>>>>>>>>> -440e-491e-854c-bbfa18a7cff2 >>>>>>>>>>>> Status: Connected >>>>>>>>>>>> Number of entries: 8 >>>>>>>>>>>> >>>>>>>>>>>> Brick 172.172.1.12:/gluster/brick3/data-hdd >>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>>> c3ae46c/images/48d7ecb8-7ac5-4 >>>>>>>>>>>> 725-bca5-b3519681cf2f/0d6080b0 >>>>>>>>>>>> -7018-4fa3-bb82-1dd9ef07d9b9 >>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>>> c3ae46c/images/647be733-f153-4 >>>>>>>>>>>> 
cdc-85bd-ba72544c2631/b453a300 >>>>>>>>>>>> -0602-4be1-8310-8bd5abe00971 >>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>>> c3ae46c/images/b1ea3f62-0f05-4 >>>>>>>>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a >>>>>>>>>>>> -499f-431d-9013-5453db93ed32 >>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>>> c3ae46c/images/6da854d1-b6be-4 >>>>>>>>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f >>>>>>>>>>>> -b7fa-4aa2-b445-6904e31839ba >>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>>> c3ae46c/images/7f647567-d18c-4 >>>>>>>>>>>> 4f1-a58e-9b8865833acb/f9364470 >>>>>>>>>>>> -9770-4bb1-a6b9-a54861849625 >>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>>> c3ae46c/images/8c8b5147-e9d6-4 >>>>>>>>>>>> 810-b45b-185e3ed65727/16f08231 >>>>>>>>>>>> -93b0-489d-a2fd-687b6bf88eaa >>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>>> c3ae46c/images/12924435-b9c2-4 >>>>>>>>>>>> aab-ba19-1c1bc31310ef/07b3db69 >>>>>>>>>>>> -440e-491e-854c-bbfa18a7cff2 >>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>>> c3ae46c/images/f3c8e7aa-6ef2-4 >>>>>>>>>>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad >>>>>>>>>>>> -2606-44ef-9cd3-ae59610a504b >>>>>>>>>>>> Status: Connected >>>>>>>>>>>> Number of entries: 8 >>>>>>>>>>>> >>>>>>>>>>>> Brick 172.172.1.13:/gluster/brick3/data-hdd >>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>>> c3ae46c/images/b1ea3f62-0f05-4 >>>>>>>>>>>> ded-8c82-9c91c90e0b61/d5d6bf5a >>>>>>>>>>>> -499f-431d-9013-5453db93ed32 >>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>>> c3ae46c/images/8c8b5147-e9d6-4 >>>>>>>>>>>> 810-b45b-185e3ed65727/16f08231 >>>>>>>>>>>> -93b0-489d-a2fd-687b6bf88eaa >>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>>> c3ae46c/images/12924435-b9c2-4 >>>>>>>>>>>> aab-ba19-1c1bc31310ef/07b3db69 >>>>>>>>>>>> -440e-491e-854c-bbfa18a7cff2 >>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>>> c3ae46c/images/f3c8e7aa-6ef2-4 >>>>>>>>>>>> 2a7-93d4-e0a4df6dd2fa/2eb0b1ad >>>>>>>>>>>> -2606-44ef-9cd3-ae59610a504b >>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 
>>>>>>>>>>>> c3ae46c/images/647be733-f153-4 >>>>>>>>>>>> cdc-85bd-ba72544c2631/b453a300 >>>>>>>>>>>> -0602-4be1-8310-8bd5abe00971 >>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>>> c3ae46c/images/48d7ecb8-7ac5-4 >>>>>>>>>>>> 725-bca5-b3519681cf2f/0d6080b0 >>>>>>>>>>>> -7018-4fa3-bb82-1dd9ef07d9b9 >>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>>> c3ae46c/images/6da854d1-b6be-4 >>>>>>>>>>>> 46b-9bf0-90a0dbbea830/3c93bd1f >>>>>>>>>>>> -b7fa-4aa2-b445-6904e31839ba >>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7 >>>>>>>>>>>> c3ae46c/images/7f647567-d18c-4 >>>>>>>>>>>> 4f1-a58e-9b8865833acb/f9364470 >>>>>>>>>>>> -9770-4bb1-a6b9-a54861849625 >>>>>>>>>>>> Status: Connected >>>>>>>>>>>> Number of entries: 8 >>>>>>>>>>>> >>>>>>>>>>>> --------- >>>>>>>>>>>> Its been in this state for a couple days now, and >>>>>>>>>>>> bandwidth monitoring shows no appreciable data moving. I've tried >>>>>>>>>>>> repeatedly commanding a full heal from all three clusters in the node. Its >>>>>>>>>>>> always the same files that need healing. >>>>>>>>>>>> >>>>>>>>>>>> When running gluster volume heal data-hdd statistics, I >>>>>>>>>>>> see sometimes different information, but always some number of "heal >>>>>>>>>>>> failed" entries. It shows 0 for split brain. >>>>>>>>>>>> >>>>>>>>>>>> I'm not quite sure what to do. I suspect it may be due >>>>>>>>>>>> to nodes 1 and 2 still being on the older ovirt/gluster release, but I'm >>>>>>>>>>>> afraid to upgrade and reboot them until I have a good gluster sync (don't >>>>>>>>>>>> need to create a split brain issue). How do I proceed with this? >>>>>>>>>>>> >>>>>>>>>>>> Second issue: I've been experiencing VERY POOR >>>>>>>>>>>> performance on most of my VMs. To the tune that logging into a windows 10 >>>>>>>>>>>> vm via remote desktop can take 5 minutes, launching quickbooks inside said >>>>>>>>>>>> vm can easily take 10 minutes. 
On some linux VMs, I get random messages >>>>>>>>>>>> like this: >>>>>>>>>>>> Message from syslogd@unifi at May 28 20:39:23 ... >>>>>>>>>>>> kernel:[6171996.308904] NMI watchdog: BUG: soft lockup - >>>>>>>>>>>> CPU#0 stuck for 22s! [mongod:14766] >>>>>>>>>>>> >>>>>>>>>>>> (the process and PID are often different) >>>>>>>>>>>> >>>>>>>>>>>> I'm not quite sure what to do about this either. My >>>>>>>>>>>> initial thought was upgrad everything to current and see if its still >>>>>>>>>>>> there, but I cannot move forward with that until my gluster is healed... >>>>>>>>>>>> >>>>>>>>>>>> Thanks! >>>>>>>>>>>> --Jim >>>>>>>>>>>> >>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>> Users mailing list -- users@ovirt.org >>>>>>>>>>>> To unsubscribe send an email to users-leave@ovirt.org >>>>>>>>>>>> Privacy Statement: https://www.ovirt.org/site/pri >>>>>>>>>>>> vacy-policy/ >>>>>>>>>>>> oVirt Code of Conduct: https://www.ovirt.org/communit >>>>>>>>>>>> y/about/community-guidelines/ >>>>>>>>>>>> List Archives: https://lists.ovirt.org/archiv >>>>>>>>>>>> es/list/users@ovirt.org/messag >>>>>>>>>>>> e/3LEV6ZQ3JV2XLAL7NYBTXOYMYUOTIRQF/ >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> _______________________________________________ >>>>>>> Users mailing list -- users@ovirt.org >>>>>>> To unsubscribe send an email to users-leave@ovirt.org >>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ >>>>>>> oVirt Code of Conduct: https://www.ovirt.org/communit >>>>>>> y/about/community-guidelines/ >>>>>>> List Archives: https://lists.ovirt.org/archiv >>>>>>> es/list/users@ovirt.org/message/ACO7RFSLBSRBAIONIC2HQ6Z24ZDE >>>>>>> S5MF/ >>>>>>> >>>>>> >>>>> >>>> >>> >> > > _______________________________________________ > Users mailing list -- users@ovirt.org > To unsubscribe send an email to users-leave@ovirt.org > Privacy Statement: https://www.ovirt.org/site/privacy-policy/ > oVirt Code of Conduct: https://www.ovirt.org/communit > 
y/about/community-guidelines/ > List Archives: https://lists.ovirt.org/archiv > es/list/users@ovirt.org/message/JKBSLSZFRMFGILXGZB4YAMJIJKEKNGMA/ > >

Ok, made it through this one finally. It turns out there's a little log file, "sanlock.log", that had the magic info in it: the ...dom_md/ids file got corrupted in the gluster crash. I had to recover that from a backup, then run hosted-engine --reinitialize-lockspace --force, and then it started working again. Now, for some reason, my hosted engine starts up but has no networking, and hosted-engine --console doesn't work either. Still looking into this now.

On Wed, May 30, 2018 at 1:29 AM, Jim Kusznir <jim@palousetech.com> wrote:
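The fix above amounted to restoring the storage domain's dom_md/ids file and rebuilding the sanlock lockspace. A minimal sketch of the check that would have pointed there sooner (the storage-domain path is an illustrative assumption, not taken from the thread; the hosted-engine command is the one used above):

```python
import os

# Hypothetical sanity check for a hosted-engine storage domain's sanlock
# "ids" file -- the file that had to be restored from backup above. The
# default path is an illustrative assumption; substitute your own mount
# point and storage-domain UUID.
def check_ids_file(domain_path):
    """Return a short diagnosis string for <domain>/dom_md/ids."""
    ids = os.path.join(domain_path, "dom_md", "ids")
    if os.path.isfile(ids) and os.path.getsize(ids) > 0:
        return "ids file present (%d bytes): %s" % (os.path.getsize(ids), ids)
    return ("ids file missing or empty: %s -- restore from backup, then run "
            "'hosted-engine --reinitialize-lockspace --force'" % ids)

print(check_ids_file(
    "/rhev/data-center/mnt/glusterSD/ovirt1:_engine/"
    "c0acdefb-7d16-48ec-9d76-659b8fe33e2a"))
```

On a host where the domain is not mounted (or the ids file is zero-length), this prints the "restore from backup" hint rather than the byte count.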
So, right now I think this message from vdsm.log captures my blocking problem:
2018-05-30 01:27:22,326-0700 INFO  (jsonrpc/5) [vdsm.api] FINISH repoStats return={u'c0acdefb-7d16-48ec-9d76-659b8fe33e2a': {'code': 0, 'actual': True, 'version': 4, 'acquired': False, 'delay': '0.00142076', 'lastCheck': '1.1', 'valid': True}} from=::1,60684, task_id=7da641b6-9a98-4db7-aafb-04ff84228cf5 (api:52)
2018-05-30 01:27:22,326-0700 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:573)
2018-05-30 01:27:23,411-0700 ERROR (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') The vm start process failed (vm:943)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 872, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2872, in _run
    dom.createWithFlags(flags)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1099, in createWithFlags
    if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
libvirtError: Failed to acquire lock: No space left on device
2018-05-30 01:27:23,411-0700 INFO  (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') Changed state to Down: Failed to acquire lock: No space left on device (code=1) (vm:1683)
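For scanning a longer vdsm.log slice, failed VM starts like the one above can be pulled out with a small scanner (a hypothetical helper, not part of vdsm; the log format is as shown in the excerpt):

```python
import re

# Matches vdsm.log ERROR lines of the form seen above:
#   2018-05-30 01:27:23,411-0700 ERROR (vm/50392390) [virt.vm]
#   (vmId='...') The vm start process failed (vm:943)
VM_FAIL = re.compile(
    r"(?P<ts>\S+ \S+) ERROR .*vmId='(?P<vmid>[0-9a-f-]+)'\) "
    r"The vm start process failed"
)

def failed_vm_starts(log_text):
    """Return (timestamp, vmId) pairs for each failed VM start."""
    return [(m.group("ts"), m.group("vmid"))
            for m in VM_FAIL.finditer(log_text)]

sample = ("2018-05-30 01:27:23,411-0700 ERROR (vm/50392390) [virt.vm] "
          "(vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') "
          "The vm start process failed (vm:943)")
print(failed_vm_starts(sample))
```

Feeding it the whole log (e.g. `failed_vm_starts(open("/var/log/vdsm/vdsm.log").read())`) shows at a glance which VMs failed and when.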
The strange thing is I can't figure out what device has no space left. The only hits on Google talk about deleting the __DIRECT_IO_TEST__ file in the storage repo, which I tried, but it did not help. Perhaps one of the files in the engine volume stores the locks, and that file is full or corrupt?
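For what it's worth, sanlock's "No space left on device" often refers to the lockspace rather than a literal full filesystem, but it's cheap to rule out real disk or inode exhaustion first. A minimal sketch (the paths passed in are examples; on a real host you'd point it at / and the hosted-engine storage mount):

```shell
# Report free blocks and free inodes for each given path, to rule out
# literal disk exhaustion before suspecting the sanlock lockspace.
check_space() {
  for p in "$@"; do
    blocks=$(df -P "$p" | awk 'NR==2 {print $4}')
    inodes=$(df -Pi "$p" | awk 'NR==2 {print $4}')
    echo "$p: $blocks blocks free, $inodes inodes free"
  done
}

# Example invocation; substitute your storage-domain mount point.
check_space / /tmp
```

If both blocks and inodes are plentiful everywhere the VM's lease lives, that points back at the lockspace/ids metadata rather than actual storage.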
--Jim
On Wed, May 30, 2018 at 12:25 AM, Jim Kusznir <jim@palousetech.com> wrote:
I tried to capture a consistent log slice across the cluster from agent.log. Several systems claim they are starting the engine, but virsh never shows any sign of it. They also report that a UUID, which I'm pretty sure is not affiliated with my engine, cannot be found.
--------- ovirt1: ---------
MainThread::INFO::2018-05-30 00:18:11,628::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineForceStop-EngineDown) sent? sent
[root@ovirt1 vdsm]# tail -n 20 ../ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2018-05-30 00:14:02,154::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400)
MainThread::INFO::2018-05-30 00:14:02,173::states::776::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Another host already took over..
MainThread::INFO::2018-05-30 00:14:02,174::state_decorators::92::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Timeout cleared while transitioning <class 'ovirt_hosted_engine_ha.agent.states.EngineStarting'> -> <class 'ovirt_hosted_engine_ha.agent.states.EngineForceStop'>
MainThread::INFO::2018-05-30 00:14:02,533::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStarting-EngineForceStop) sent? sent
MainThread::INFO::2018-05-30 00:14:02,843::ovf_store::151::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8-49e73e6ba929
MainThread::INFO::2018-05-30 00:18:09,236::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineForceStop (score: 3400)
MainThread::INFO::2018-05-30 00:18:09,236::hosted_engine::499::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Best remote host ovirt3.nwfiber.com (id: 3, score: 3400)
MainThread::INFO::2018-05-30 00:18:09,253::hosted_engine::1007::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) Shutting down vm using `/usr/sbin/hosted-engine --vm-poweroff`
MainThread::INFO::2018-05-30 00:18:11,159::hosted_engine::1012::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) stdout:
MainThread::INFO::2018-05-30 00:18:11,160::hosted_engine::1013::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) stderr:
MainThread::ERROR::2018-05-30 00:18:11,160::hosted_engine::1021::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) Engine VM stopped on localhost
MainThread::INFO::2018-05-30 00:18:11,628::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineForceStop-EngineDown) sent? sent
MainThread::INFO::2018-05-30 00:22:17,692::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineDown (score: 3400)
MainThread::INFO::2018-05-30 00:22:17,713::state_machine::169::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Global metadata: {'maintenance': False}
MainThread::INFO::2018-05-30 00:22:17,713::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host ovirt2.nwfiber.com (id 2): {'conf_on_shared_storage': True, 'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=4348 (Wed May 30 00:16:46 2018)\nhost-id=2\nscore=3400\nvm_conf_refresh_time=4349 (Wed May 30 00:16:47 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineDown\nstopped=False\n', 'hostname': 'ovirt2.nwfiber.com', 'alive': True, 'host-id': 2, 'engine-status': {'reason': 'bad vm status', 'health': 'bad', 'vm': 'down_unexpected', 'detail': 'Down'}, 'score': 3400, 'stopped': False, 'maintenance': False, 'crc32': '5ee0a557', 'local_conf_timestamp': 4349, 'host-ts': 4348}
MainThread::INFO::2018-05-30 00:22:17,713::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host ovirt3.nwfiber.com (id 3): {'conf_on_shared_storage': True, 'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=100036 (Wed May 30 00:16:58 2018)\nhost-id=3\nscore=3400\nvm_conf_refresh_time=100037 (Wed May 30 00:16:58 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineStart\nstopped=False\n', 'hostname': 'ovirt3.nwfiber.com', 'alive': True, 'host-id': 3, 'engine-status': {'reason': 'bad vm status', 'health': 'bad', 'vm': 'down_unexpected', 'detail': 'Down'}, 'score': 3400, 'stopped': False, 'maintenance': False, 'crc32': 'e2fadf31', 'local_conf_timestamp': 100037, 'host-ts': 100036}
MainThread::INFO::2018-05-30 00:22:17,713::state_machine::177::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Local (id 1): {'engine-health': {'reason': 'vm not running on this host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'bridge': True, 'mem-free': 30186.0, 'maintenance': False, 'cpu-load': 0.0308, 'gateway': 1.0, 'storage-domain': True}
MainThread::INFO::2018-05-30 00:22:17,714::states::508::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine down and local host has best score (3400), attempting to start engine VM
MainThread::INFO::2018-05-30 00:22:18,309::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineDown-EngineStart) sent? sent
MainThread::INFO::2018-05-30 00:22:18,624::ovf_store::151::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8-49e73e6ba929
---------- ovirt2 ----------
MainThread::INFO::2018-05-30 00:08:34,534::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineUnexpectedlyDown (score: 0)
MainThread::INFO::2018-05-30 00:08:34,534::hosted_engine::499::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Best remote host ovirt3.nwfiber.com (id: 3, score: 3400)
MainThread::INFO::2018-05-30 00:08:34,545::states::678::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine down, local host does not have best score
MainThread::INFO::2018-05-30 00:12:40,627::states::693::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score) Score is 0 due to unexpected vm shutdown at Wed May 30 00:04:28 2018
MainThread::INFO::2018-05-30 00:12:40,628::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineUnexpectedlyDown (score: 0)
MainThread::INFO::2018-05-30 00:12:40,628::hosted_engine::499::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Best remote host ovirt1.nwfiber.com (id: 1, score: 3400)
MainThread::INFO::2018-05-30 00:12:40,643::states::678::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine down, local host does not have best score
MainThread::INFO::2018-05-30 00:12:40,827::ovf_store::151::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8-49e73e6ba929
MainThread::INFO::2018-05-30 00:16:46,635::states::693::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score) Score is 0 due to unexpected vm shutdown at Wed May 30 00:04:28 2018
MainThread::INFO::2018-05-30 00:16:46,636::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineUnexpectedlyDown (score: 0)
MainThread::INFO::2018-05-30 00:16:46,648::state_machine::169::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Global metadata: {'maintenance': False}
MainThread::INFO::2018-05-30 00:16:46,648::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host ovirt1.nwfiber.com (id 1): {'conf_on_shared_storage': True, 'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=7614 (Wed May 30 00:09:54 2018)\nhost-id=1\nscore=3400\nvm_conf_refresh_time=7614 (Wed May 30 00:09:54 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineStarting\nstopped=False\n', 'hostname': 'ovirt1.nwfiber.com', 'alive': True, 'host-id': 1, 'engine-status': {'reason': 'vm not running on this host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'score': 3400, 'stopped': False, 'maintenance': False, 'crc32': '1fa13e84', 'local_conf_timestamp': 7614, 'host-ts': 7614}
MainThread::INFO::2018-05-30 00:16:46,648::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host ovirt3.nwfiber.com (id 3): {'conf_on_shared_storage': True, 'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=99541 (Wed May 30 00:08:43 2018)\nhost-id=3\nscore=3400\nvm_conf_refresh_time=99542 (Wed May 30 00:08:43 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineForceStop\nstopped=False\n', 'hostname': 'ovirt3.nwfiber.com', 'alive': True, 'host-id': 3, 'engine-status': {'reason': 'bad vm status', 'health': 'bad', 'vm': 'down', 'detail': 'Down'}, 'score': 3400, 'stopped': False, 'maintenance': False, 'crc32': '56f87e2c', 'local_conf_timestamp': 99542, 'host-ts': 99541}
MainThread::INFO::2018-05-30 00:16:46,648::state_machine::177::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Local (id 2): {'engine-health': {'reason': 'bad vm status', 'health': 'bad', 'vm': 'down_unexpected', 'detail': 'Down'}, 'bridge': True, 'mem-free': 30891.0, 'maintenance': False, 'cpu-load': 0.0186, 'gateway': 1.0, 'storage-domain': True}
MainThread::INFO::2018-05-30 00:16:46,919::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineUnexpectedlyDown-EngineDown) sent? sent
MainThread::INFO::2018-05-30 00:20:52,918::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineDown (score: 3400)
MainThread::INFO::2018-05-30 00:20:52,918::hosted_engine::499::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Best remote host ovirt1.nwfiber.com (id: 1, score: 3400)
MainThread::INFO::2018-05-30 00:20:52,930::states::508::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine down and local host has best score (3400), attempting to start engine VM
MainThread::INFO::2018-05-30 00:20:53,203::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineDown-EngineStart) sent? sent
MainThread::INFO::2018-05-30 00:20:53,393::ovf_store::151::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8-49e73e6ba929
-------- ovirt3 -------
MainThread::INFO::2018-05-30 00:21:07,343::hosted_engine::929::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) Starting vm using `/usr/sbin/hosted-engine --vm-start`
MainThread::INFO::2018-05-30 00:21:08,107::hosted_engine::935::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) stdout:
MainThread::INFO::2018-05-30 00:21:08,108::hosted_engine::936::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) stderr: Command VM.getStats with args {'vmID': '50392390-4f89-435c-bf3b-8254b58a4ef7'} failed: (code=1, message=Virtual machine does not exist: {'vmId': u'50392390-4f89-435c-bf3b-8254b58a4ef7'})
MainThread::INFO::2018-05-30 00:21:08,108::hosted_engine::948::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) Engine VM started on localhost
MainThread::INFO::2018-05-30 00:21:08,413::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStart-EngineStarting) sent? sent
[root@ovirt3 images]# tail -n 20 /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2018-05-30 00:12:51,792::hosted_engine::1012::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) stdout:
MainThread::INFO::2018-05-30 00:12:51,792::hosted_engine::1013::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) stderr:
MainThread::ERROR::2018-05-30 00:12:51,793::hosted_engine::1021::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) Engine VM stopped on localhost
MainThread::INFO::2018-05-30 00:12:52,157::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineForceStop-EngineDown) sent? sent
MainThread::INFO::2018-05-30 00:16:58,237::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineDown (score: 3400)
MainThread::INFO::2018-05-30 00:16:58,256::states::508::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine down and local host has best score (3400), attempting to start engine VM
MainThread::INFO::2018-05-30 00:16:58,595::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineDown-EngineStart) sent? sent
MainThread::INFO::2018-05-30 00:16:58,816::ovf_store::151::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8-49e73e6ba929
MainThread::INFO::2018-05-30 00:21:05,082::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStart (score: 3400)
MainThread::INFO::2018-05-30 00:21:05,083::hosted_engine::499::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Best remote host ovirt1.nwfiber.com (id: 1, score: 3400)
MainThread::INFO::2018-05-30 00:21:05,125::hosted_engine::968::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state) Ensuring VDSM state is clear for engine VM
MainThread::INFO::2018-05-30 00:21:05,141::hosted_engine::988::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state) Cleaning state for non-running VM
MainThread::INFO::2018-05-30 00:21:07,343::hosted_engine::980::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state) Vdsm state for VM clean
MainThread::INFO::2018-05-30 00:21:07,343::hosted_engine::929::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) Starting vm using `/usr/sbin/hosted-engine --vm-start`
MainThread::INFO::2018-05-30 00:21:08,107::hosted_engine::935::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) stdout:
MainThread::INFO::2018-05-30 00:21:08,108::hosted_engine::936::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) stderr: Command VM.getStats with args {'vmID': '50392390-4f89-435c-bf3b-8254b58a4ef7'} failed: (code=1, message=Virtual machine does not exist: {'vmId': u'50392390-4f89-435c-bf3b-8254b58a4ef7'})
MainThread::INFO::2018-05-30 00:21:08,108::hosted_engine::948::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) Engine VM started on localhost
MainThread::INFO::2018-05-30 00:21:08,413::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStart-EngineStarting) sent? sent
---------- Note that virsh does not show the engine started on this machine....
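To cross-check what the HA agent claims against what is actually running, this is roughly what I use (a sketch; `virsh -r` opens a read-only connection so it doesn't need vdsm's SASL credentials, and `vdsm-client` is the 4.2-era replacement for the old vdsClient):

```shell
# Read-only libvirt view of all domains on this host, running or not:
virsh -r list --all

# Ask vdsm directly which VMs it knows about:
vdsm-client Host getVMList
```

If virsh shows nothing while the agent logs "Engine VM started on localhost", the start request is dying between vdsm and libvirt rather than inside the guest.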
On Wed, May 30, 2018 at 12:16 AM, Jim Kusznir <jim@palousetech.com> wrote:
So, on ovirt2, I ran the hosted-engine --vm-status and it took over 2 minutes to get it to return, but two of the 3 machines had current (non-stale) data:
[root@ovirt2 ~]# hosted-engine --vm-status
--== Host 1 status ==--
conf_on_shared_storage             : True
Status up-to-date                  : False
Hostname                           : ovirt1.nwfiber.com
Host ID                            : 1
Engine status                      : unknown stale-data
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : f562054b
local_conf_timestamp               : 7368
Host timestamp                     : 7367
Extra metadata (valid at timestamp):
        metadata_parse_version=1
        metadata_feature_version=1
        timestamp=7367 (Wed May 30 00:05:47 2018)
        host-id=1
        score=3400
        vm_conf_refresh_time=7368 (Wed May 30 00:05:48 2018)
        conf_on_shared_storage=True
        maintenance=False
        state=EngineStart
        stopped=False
--== Host 2 status ==--
conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : ovirt2.nwfiber.com
Host ID                            : 2
Engine status                      : {"reason": "bad vm status", "health": "bad", "vm": "down_unexpected", "detail": "Down"}
Score                              : 0
stopped                            : False
Local maintenance                  : False
crc32                              : 6822f86a
local_conf_timestamp               : 3610
Host timestamp                     : 3610
Extra metadata (valid at timestamp):
        metadata_parse_version=1
        metadata_feature_version=1
        timestamp=3610 (Wed May 30 00:04:28 2018)
        host-id=2
        score=0
        vm_conf_refresh_time=3610 (Wed May 30 00:04:28 2018)
        conf_on_shared_storage=True
        maintenance=False
        state=EngineUnexpectedlyDown
        stopped=False
        timeout=Wed Dec 31 17:09:30 1969
--== Host 3 status ==--
conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : ovirt3.nwfiber.com
Host ID                            : 3
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : 006512a3
local_conf_timestamp               : 99294
Host timestamp                     : 99294
Extra metadata (valid at timestamp):
        metadata_parse_version=1
        metadata_feature_version=1
        timestamp=99294 (Wed May 30 00:04:36 2018)
        host-id=3
        score=3400
        vm_conf_refresh_time=99294 (Wed May 30 00:04:36 2018)
        conf_on_shared_storage=True
        maintenance=False
        state=EngineStarting
        stopped=False
---------------
I saw that the HA score was 0 on ovirt2 (where I was focusing my efforts) while ovirt3 had a good HA score, so I tried to launch it there:
[root@ovirt3 ff3d0185-d120-40ee-9792-ebbd5caaca8b]# hosted-engine --vm-start
Command VM.getStats with args {'vmID': '50392390-4f89-435c-bf3b-8254b58a4ef7'} failed:
(code=1, message=Virtual machine does not exist: {'vmId': u'50392390-4f89-435c-bf3b-8254b58a4ef7'})
ovirt1 was returning the "ha-agent or storage not online" message, but now it's hanging (probably for 2 minutes), so it may actually return status this time. I figure that if it can't return status, there's no point in trying to start the VM there.
On Wed, May 30, 2018 at 12:08 AM, Jim Kusznir <jim@palousetech.com> wrote:
The agent.log was sent a few minutes ago in another message. Here's the gluster client log for the engine volume:
[2018-05-30 06:31:12.343709] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata selfheal on 3a430e50-48cb-463c-8657-aab5458ac155
[2018-05-30 06:31:12.348210] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-engine-replicate-0: expunging file 117c5234-e6d6-4f03-809b-a4ef6a6fac4d/hosted-engine.metadata (38a277cd-113d-4ca8-8f48-fadc944d3272) on engine-client-1
[2018-05-30 06:31:12.360602] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed metadata selfheal on 3a430e50-48cb-463c-8657-aab5458ac155. sources=[0] 2 sinks=1
[2018-05-30 06:31:12.379169] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed entry selfheal on 117c5234-e6d6-4f03-809b-a4ef6a6fac4d. sources=[0] 2 sinks=1
[2018-05-30 06:32:54.756729] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f
[2018-05-30 06:32:54.763175] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed metadata selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 06:37:06.581538] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 06c557a9-b550-4e25-b190-804e50cea65b. sources=[0] 2 sinks=1
[2018-05-30 06:39:55.739302] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 06:44:04.911825] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 06:58:05.123854] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata selfheal on 03ca7a59-e8ee-4a70-bc68-47e972aeb801
[2018-05-30 06:58:05.131929] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed metadata selfheal on 03ca7a59-e8ee-4a70-bc68-47e972aeb801. sources=[0] 2 sinks=1
[2018-05-30 07:01:42.583991] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 07:05:48.777982] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
-------- broker log: --------
Thread-1::INFO::2018-05-30 00:07:13,241::ping::60::ping.Ping::(action) Successfully pinged 192.168.8.1
Listener::INFO::2018-05-30 00:07:14,344::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:19,356::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-5::INFO::2018-05-30 00:07:22,081::engine_health::202::engine_health.EngineHealth::(_result_from_stats) VM not running on this host, status Down
Listener::INFO::2018-05-30 00:07:24,369::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:29,382::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:34,393::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:39,406::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:44,419::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:49,431::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:54,444::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:59,452::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-2::INFO::2018-05-30 00:08:03,736::mgmt_bridge::62::mgmt_bridge.MgmtBridge::(action) Found bridge ovirtmgmt with ports
Listener::INFO::2018-05-30 00:08:04,460::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:08:09,468::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-4::INFO::2018-05-30 00:08:11,461::cpu_load_no_engine::116::cpu_load_no_engine.CpuLoadNoEngine::(calculate_load) VM stats do not contain cpu usage. VM might be down.
Thread-4::INFO::2018-05-30 00:08:11,461::cpu_load_no_engine::126::cpu_load_no_engine.CpuLoadNoEngine::(calculate_load) System load total=0.0169, engine=0.0000, non-engine=0.0169
Listener::INFO::2018-05-30 00:08:14,481::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:08:19,490::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-5::INFO::2018-05-30 00:08:22,175::engine_health::202::engine_health.EngineHealth::(_result_from_stats) VM not running on this host, status Down
Thread-1::INFO::2018-05-30 00:08:22,436::ping::60::ping.Ping::(action) Successfully pinged 192.168.8.1
Listener::INFO::2018-05-30 00:08:24,503::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
On Wed, May 30, 2018 at 12:03 AM, Sahina Bose <sabose@redhat.com> wrote:
[Adding gluster-users back]
Nothing amiss with volume info and status. Can you check the agent.log and broker.log - will be under /var/log/ovirt-hosted-engine-ha/
Also the gluster client logs - under /var/log/glusterfs/rhev-data-center-mnt-glusterSD<volume>.log
On Wed, May 30, 2018 at 12:08 PM, Jim Kusznir <jim@palousetech.com> wrote:
I believe the gluster data store for the engine is up and working correctly. The rest are not mounted, as the engine hasn't managed to start correctly yet. I did perform the copy-a-junk-file test: copy a file into the data store, check that it's there on another host, then delete it and see that it disappeared on the first host; it passed that test. Here's the info and status. (I have NOT performed the steps that Krutika and Ravishankar suggested yet, as I don't have my data volumes working again yet.)
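The junk-file test I describe above can be scripted; this is a rough sketch (the two mount-path arguments are placeholders for gluster client mounts of the same volume on two different hosts, so on a real cluster the read half would run over ssh on the second host):

```shell
# Write a marker file through one client mount and verify it is visible
# through another, then clean up. Mount paths are passed as arguments.
check_replica() {
  mnt_a=$1
  mnt_b=$2
  marker=".consistency-test-$$"
  echo ok > "$mnt_a/$marker" || { echo "write failed"; return 1; }
  if [ "$(cat "$mnt_b/$marker" 2>/dev/null)" = "ok" ]; then
    echo "replica visible"
  else
    echo "replica NOT visible"
  fi
  rm -f "$mnt_a/$marker" "$mnt_b/$marker"
}
```

Note this only proves the client mounts agree; it says nothing about whether all bricks are healthy underneath, which is what the volume status below is for.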
[root@ovirt2 images]# gluster volume info
Volume Name: data
Type: Replicate
Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick2/data
Brick2: ovirt2.nwfiber.com:/gluster/brick2/data
Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter)
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
server.allow-insecure: on
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
Volume Name: data-hdd
Type: Replicate
Volume ID: d342a3ab-16f3-49f0-bbcf-f788be8ac5f1
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 172.172.1.11:/gluster/brick3/data-hdd
Brick2: 172.172.1.12:/gluster/brick3/data-hdd
Brick3: 172.172.1.13:/gluster/brick3/data-hdd
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
network.ping-timeout: 30
server.allow-insecure: on
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
performance.readdir-ahead: on
Volume Name: engine
Type: Replicate
Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
Volume Name: iso
Type: Replicate
Volume ID: b1ba15f5-0f0f-4411-89d0-595179f02b92
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick4/iso
Brick2: ovirt2.nwfiber.com:/gluster/brick4/iso
Brick3: ovirt3.nwfiber.com:/gluster/brick4/iso (arbiter)
Options Reconfigured:
performance.strict-o-direct: on
nfs.disable: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 6
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
[root@ovirt2 images]# gluster volume status
Status of volume: data
Gluster process                                 TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick2/data   49152     0          Y       3226
Brick ovirt2.nwfiber.com:/gluster/brick2/data   49152     0          Y       2967
Brick ovirt3.nwfiber.com:/gluster/brick2/data   49152     0          Y       2554
Self-heal Daemon on localhost                   N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y       4771

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: data-hdd Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------ ------------------ Brick 172.172.1.11:/gluster/brick3/data-hdd 49153 0 Y 3232 Brick 172.172.1.12:/gluster/brick3/data-hdd 49153 0 Y 2976 Brick 172.172.1.13:/gluster/brick3/data-hdd N/A N/A N N/A NFS Server on localhost N/A N/A N N/A Self-heal Daemon on localhost N/A N/A Y 4818 NFS Server on ovirt3.nwfiber.com N/A N/A N N/A Self-heal Daemon on ovirt3.nwfiber.com N/A N/A Y 17853 NFS Server on ovirt1.nwfiber.com N/A N/A N N/A Self-heal Daemon on ovirt1.nwfiber.com N/A N/A Y 4771
Task Status of Volume data-hdd
------------------------------------------------------------------------------
There are no active volume tasks
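[Editor's note] The status above shows the data-hdd brick on 172.172.1.13 offline (Online = N) while its self-heal daemon still runs. A hedged sketch of the usual recovery steps, wrapped in a function so nothing executes until invoked by hand on a host whose underlying brick filesystem is healthy again (the function name is mine, not a gluster command):

```shell
# Sketch only: restart a dead brick process and kick off healing.
restart_offline_brick() {
    vol="$1"
    gluster volume start "$vol" force   # respawns any dead brick daemons
    gluster volume status "$vol"        # confirm Online = Y for every brick
    gluster volume heal "$vol"          # then trigger an index self-heal
}
# On a real host: restart_offline_brick data-hdd
```

Note that `start ... force` only restarts brick processes; a disk with hard I/O errors (as reported later in this thread) must be replaced first.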
Status of volume: engine
Gluster process                                 TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick1/engine 49154     0          Y       3239
Brick ovirt2.nwfiber.com:/gluster/brick1/engine 49154     0          Y       2982
Brick ovirt3.nwfiber.com:/gluster/brick1/engine 49154     0          Y       2578
Self-heal Daemon on localhost                   N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y       4771
Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: iso
Gluster process                                 TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt1.nwfiber.com:/gluster/brick4/iso    49155     0          Y       3247
Brick ovirt2.nwfiber.com:/gluster/brick4/iso    49155     0          Y       2990
Brick ovirt3.nwfiber.com:/gluster/brick4/iso    49155     0          Y       2580
Self-heal Daemon on localhost                   N/A       N/A        Y       4818
Self-heal Daemon on ovirt3.nwfiber.com          N/A       N/A        Y       17853
Self-heal Daemon on ovirt1.nwfiber.com          N/A       N/A        Y       4771
Task Status of Volume iso
------------------------------------------------------------------------------
There are no active volume tasks
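[Editor's note] The `gluster volume heal ... info` output quoted at the top of this thread can be summarized with a small helper (my own, not a gluster command) that totals the pending entries across all bricks:

```shell
# Sums every "Number of entries: N" line from heal-info output on stdin.
count_heal_entries() {
    awk '/^Number of entries:/ { total += $NF } END { print total + 0 }'
}
# On a live cluster (volume name from this thread):
#   gluster volume heal data-hdd info | count_heal_entries
```

A total that stays flat across repeated runs, as reported here, suggests the self-heal daemon is not making progress rather than simply being slow.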
On Tue, May 29, 2018 at 11:30 PM, Sahina Bose <sabose@redhat.com> wrote:
>
> On Wed, May 30, 2018 at 10:42 AM, Jim Kusznir <jim@palousetech.com> wrote:
>
>> hosted-engine --deploy failed (it would not come up on my existing
>> gluster storage). However, I realized no changes were written to my
>> existing storage, so I went back to trying to get my old engine running.
>>
>> hosted-engine --vm-status is now taking a very long time (5+ minutes) to
>> return, and it returns stale information everywhere. I thought perhaps
>> the lockspace was corrupt, so I tried to clean that and the metadata, but
>> both are failing (--clean-metadata has hung and I can't even ctrl-c out
>> of it).
>>
>> How can I reinitialize all the lockspace/metadata safely? There is no
>> engine or VMs running currently....
>>
>
> I think the first thing to make sure is that your storage is up and
> running. Can you mount the gluster volumes and access the contents there?
> Please provide the gluster volume info and gluster volume status of the
> volumes that you're using.
>
>
>> --Jim
>>
>> On Tue, May 29, 2018 at 9:33 PM, Jim Kusznir <jim@palousetech.com> wrote:
>>
>>> Well, things went from bad to very, very bad....
>>>
>>> It appears that during one of the 2-minute lockups, the fencing agents
>>> decided that another node in the cluster was down. As a result, 2 of the
>>> 3 nodes were simultaneously reset by a fencing-agent reboot. After the
>>> nodes came back up, the engine would not start. All running VMs
>>> (including VMs on the 3rd node, which was not rebooted) crashed.
>>>
>>> I've now been working for about 3 hours trying to get the engine to come
>>> up. I don't know why it won't start. hosted-engine --vm-start says it's
>>> starting, but it doesn't start (virsh doesn't show any VMs running). I'm
>>> currently running --deploy, as I had run out of other options. I hope
>>> this will let me re-import all my existing VMs and start them back up
>>> after everything comes back up.
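[Editor's note] On the lockspace question above: a hedged sketch, assuming oVirt 4.2's hosted-engine tooling, which can rebuild the sanlock lockspace while the cluster is in global maintenance and no engine VM is running. The steps are wrapped in a function so they only run deliberately; verify each command against your version's `hosted-engine --help` first:

```shell
# Sketch only: rebuild the hosted-engine sanlock lockspace instead of
# hand-cleaning metadata. Requires the engine VM to be down everywhere.
reinit_he_lockspace() {
    hosted-engine --set-maintenance --mode=global
    hosted-engine --reinitialize-lockspace --force
    hosted-engine --set-maintenance --mode=none
    hosted-engine --vm-status   # agent data should no longer be stale
}
```

As Sahina notes below, none of this helps while the underlying gluster storage is unhealthy; confirm the volumes mount and are readable first.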
>>>
>>> I do have an unverified geo-rep backup; I don't know if it is a good
>>> backup (there were several prior messages to this list, but I didn't get
>>> replies to my questions. It was behaving in what I believe to be a
>>> "strange" way, and the data directories are larger than their source).
>>>
>>> I'll see if my --deploy works, and if not, I'll be back with another
>>> message/help request.
>>>
>>> When the dust settles and I'm at least minimally functional again, I
>>> really want to understand why all these technologies designed to offer
>>> redundancy conspired to reduce uptime and create failures where there
>>> weren't any otherwise. I thought that with hosted engine, 3 oVirt
>>> servers, and glusterfs with at minimum replica 2 + arbiter or replica 3,
>>> I should have had strong resilience against server failure or disk
>>> failure, and should have been able to prevent or recover from data
>>> corruption. Instead, all of the above happened (once I get my cluster
>>> back up, I still have to try to recover my webserver VM, which won't
>>> boot due to XFS corrupt-journal issues created during the gluster
>>> crashes). I think a lot of these issues were rooted in the upgrade from
>>> 4.1 to 4.2.
>>>
>>> --Jim
>>>
>>> On Tue, May 29, 2018 at 6:25 PM, Jim Kusznir <jim@palousetech.com> wrote:
>>>
>>>> I also finally found the following in my system log on one server:
>>>>
>>>> [10679.524491] INFO: task glusterclogro:14933 blocked for more than 120 seconds.
>>>> [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [10679.527144] glusterclogro   D ffff97209832bf40     0 14933      1 0x00000080
>>>> [10679.527150] Call Trace:
>>>> [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70
>>>> [10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
>>>> [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
>>>> [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
>>>> [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
>>>> [10679.527268] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
>>>> [10679.527271] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20
>>>> [10679.527275] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
>>>> [10679.527279] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
>>>> [10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds.
>>>> [10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [10679.529956] glusterposixfsy D ffff972495f84f10     0 14941      1 0x00000080
>>>> [10679.529961] Call Trace:
>>>> [10679.529966] [<ffffffffb9913f79>] schedule+0x29/0x70
>>>> [10679.530003] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
>>>> [10679.530008] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
>>>> [10679.530038] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
>>>> [10679.530042] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
>>>> [10679.530046] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
>>>> [10679.530050] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20
>>>> [10679.530054] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
>>>> [10679.530058] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
>>>> [10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120 seconds.
>>>> [10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [10679.533732] glusteriotwr13  D ffff9720a83f0000     0 15486      1 0x00000080
>>>> [10679.533738] Call Trace:
>>>> [10679.533747] [<ffffffffb9913f79>] schedule+0x29/0x70
>>>> [10679.533799] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
>>>> [10679.533806] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
>>>> [10679.533846] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
>>>> [10679.533852] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
>>>> [10679.533858] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
>>>> [10679.533863] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20
>>>> [10679.533868] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
>>>> [10679.533873] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
>>>> [10919.512757] INFO: task glusterclogro:14933 blocked for more than 120 seconds.
>>>> [10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [10919.516663] glusterclogro   D ffff97209832bf40     0 14933      1 0x00000080
>>>> [10919.516677] Call Trace:
>>>> [10919.516690] [<ffffffffb9913f79>] schedule+0x29/0x70
>>>> [10919.516696] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
>>>> [10919.516703] [<ffffffffb951cc04>] ? blk_finish_plug+0x14/0x40
>>>> [10919.516768] [<ffffffffc05e9224>] ? _xfs_buf_ioapply+0x334/0x460 [xfs]
>>>> [10919.516774] [<ffffffffb991432d>] wait_for_completion+0xfd/0x140
>>>> [10919.516782] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
>>>> [10919.516821] [<ffffffffc05eb0a3>] ? _xfs_buf_read+0x23/0x40 [xfs]
>>>> [10919.516859] [<ffffffffc05eafa9>] xfs_buf_submit_wait+0xf9/0x1d0 [xfs]
>>>> [10919.516902] [<ffffffffc061b279>] ? xfs_trans_read_buf_map+0x199/0x400 [xfs]
>>>> [10919.516940] [<ffffffffc05eb0a3>] _xfs_buf_read+0x23/0x40 [xfs]
>>>> [10919.516977] [<ffffffffc05eb1b9>] xfs_buf_read_map+0xf9/0x160 [xfs]
>>>> [10919.517022] [<ffffffffc061b279>] xfs_trans_read_buf_map+0x199/0x400 [xfs]
>>>> [10919.517057] [<ffffffffc05c8d04>] xfs_da_read_buf+0xd4/0x100 [xfs]
>>>> [10919.517091] [<ffffffffc05c8d53>] xfs_da3_node_read+0x23/0xd0 [xfs]
>>>> [10919.517126] [<ffffffffc05c9fee>] xfs_da3_node_lookup_int+0x6e/0x2f0 [xfs]
>>>> [10919.517160] [<ffffffffc05d5a1d>] xfs_dir2_node_lookup+0x4d/0x170 [xfs]
>>>> [10919.517194] [<ffffffffc05ccf5d>] xfs_dir_lookup+0x1bd/0x1e0 [xfs]
>>>> [10919.517233] [<ffffffffc05fd8d9>] xfs_lookup+0x69/0x140 [xfs]
>>>> [10919.517271] [<ffffffffc05fa018>] xfs_vn_lookup+0x78/0xc0 [xfs]
>>>> [10919.517278] [<ffffffffb9425cf3>] lookup_real+0x23/0x60
>>>> [10919.517283] [<ffffffffb9426702>] __lookup_hash+0x42/0x60
>>>> [10919.517288] [<ffffffffb942d519>] SYSC_renameat2+0x3a9/0x5a0
>>>> [10919.517296] [<ffffffffb94d3753>] ? selinux_file_free_security+0x23/0x30
>>>> [10919.517304] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
>>>> [10919.517309] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
>>>> [10919.517313] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
>>>> [10919.517318] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
>>>> [10919.517323] [<ffffffffb942e58e>] SyS_renameat2+0xe/0x10
>>>> [10919.517328] [<ffffffffb942e5ce>] SyS_rename+0x1e/0x20
>>>> [10919.517333] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
>>>> [10919.517339] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
>>>> [11159.496095] INFO: task glusteriotwr9:15482 blocked for more than 120 seconds.
>>>> [11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [11159.498978] glusteriotwr9   D ffff971fa0fa1fa0     0 15482      1 0x00000080
>>>> [11159.498984] Call Trace:
>>>> [11159.498995] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50
>>>> [11159.498999] [<ffffffffb9913f79>] schedule+0x29/0x70
>>>> [11159.499003] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
>>>> [11159.499056] [<ffffffffc05dd9b7>] ? xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs]
>>>> [11159.499082] [<ffffffffc05dd43e>] ? xfs_iext_bno_to_irec+0x8e/0xd0 [xfs]
>>>> [11159.499090] [<ffffffffb92f7a12>] ? ktime_get_ts64+0x52/0xf0
>>>> [11159.499093] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50
>>>> [11159.499097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130
>>>> [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20
>>>> [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50
>>>> [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0
>>>> [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90
>>>> [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40
>>>> [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70
>>>> [11159.499125] [<ffffffffb9394e85>] grab_cache_page_write_begin+0x55/0xc0
>>>> [11159.499130] [<ffffffffb9484b76>] iomap_write_begin+0x66/0x100
>>>> [11159.499135] [<ffffffffb9484edf>] iomap_write_actor+0xcf/0x1d0
>>>> [11159.499140] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80
>>>> [11159.499144] [<ffffffffb94854e7>] iomap_apply+0xb7/0x150
>>>> [11159.499149] [<ffffffffb9485621>] iomap_file_buffered_write+0xa1/0xe0
>>>> [11159.499153] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80
>>>> [11159.499182] [<ffffffffc05f025d>] xfs_file_buffered_aio_write+0x12d/0x2c0 [xfs]
>>>> [11159.499213] [<ffffffffc05f057d>] xfs_file_aio_write+0x18d/0x1b0 [xfs]
>>>> [11159.499217] [<ffffffffb941a533>] do_sync_write+0x93/0xe0
>>>> [11159.499222] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0
>>>> [11159.499225] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0
>>>> [11159.499230] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
>>>> [11159.499234] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
>>>> [11159.499238] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
>>>> [11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than 120 seconds.
>>>> [11279.490197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [11279.491665] xfsaild/dm-10   D ffff9720a8660fd0     0  1134      2 0x00000000
>>>> [11279.491671] Call Trace:
>>>> [11279.491682] [<ffffffffb92a3a2e>] ? try_to_del_timer_sync+0x5e/0x90
>>>> [11279.491688] [<ffffffffb9913f79>] schedule+0x29/0x70
>>>> [11279.491744] [<ffffffffc060de36>] _xfs_log_force+0x1c6/0x2c0 [xfs]
>>>> [11279.491750] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
>>>> [11279.491783] [<ffffffffc0619fec>] ? xfsaild+0x16c/0x6f0 [xfs]
>>>> [11279.491817] [<ffffffffc060df5c>] xfs_log_force+0x2c/0x70 [xfs]
>>>> [11279.491849] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
>>>> [11279.491880] [<ffffffffc0619fec>] xfsaild+0x16c/0x6f0 [xfs]
>>>> [11279.491913] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
>>>> [11279.491919] [<ffffffffb92bb161>] kthread+0xd1/0xe0
>>>> [11279.491926] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40
>>>> [11279.491932] [<ffffffffb9920677>] ret_from_fork_nospec_begin+0x21/0x21
>>>> [11279.491936] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40
>>>> [11279.491976] INFO: task glusterclogfsyn:14934 blocked for more than 120 seconds.
>>>> [11279.493466] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [11279.494952] glusterclogfsyn D ffff97209832af70     0 14934      1 0x00000080
>>>> [11279.494957] Call Trace:
>>>> [11279.494979] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod]
>>>> [11279.494983] [<ffffffffb9913f79>] schedule+0x29/0x70
>>>> [11279.494987] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
>>>> [11279.494997] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod]
>>>> [11279.495001] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130
>>>> [11279.495005] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140
>>>> [11279.495010] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
>>>> [11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110
>>>> [11279.495049] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs]
>>>> [11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs]
>>>> [11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
>>>> [11279.495090] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
>>>> [11279.495094] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20
>>>> [11279.495098] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
>>>> [11279.495102] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
>>>> [11279.495105] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds.
>>>> [11279.496606] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [11279.498114] glusterposixfsy D ffff972495f84f10     0 14941      1 0x00000080
>>>> [11279.498118] Call Trace:
>>>> [11279.498134] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod]
>>>> [11279.498138] [<ffffffffb9913f79>] schedule+0x29/0x70
>>>> [11279.498142] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
>>>> [11279.498152] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod]
>>>> [11279.498156] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130
>>>> [11279.498160] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140
>>>> [11279.498165] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
>>>> [11279.498169] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110
>>>> [11279.498202] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs]
>>>> [11279.498231] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs]
>>>> [11279.498238] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
>>>> [11279.498242] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
>>>> [11279.498246] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20
>>>> [11279.498250] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
>>>> [11279.498254] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
>>>> [11279.498257] INFO: task glusteriotwr1:14950 blocked for more than 120 seconds.
>>>> [11279.499789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [11279.501343] glusteriotwr1   D ffff97208b6daf70     0 14950      1 0x00000080
>>>> [11279.501348] Call Trace:
>>>> [11279.501353] [<ffffffffb9913f79>] schedule+0x29/0x70
>>>> [11279.501390] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
>>>> [11279.501396] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
>>>> [11279.501428] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
>>>> [11279.501432] [<ffffffffb944ef3f>] generic_write_sync+0x4f/0x70
>>>> [11279.501461] [<ffffffffc05f0545>] xfs_file_aio_write+0x155/0x1b0 [xfs]
>>>> [11279.501466] [<ffffffffb941a533>] do_sync_write+0x93/0xe0
>>>> [11279.501471] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0
>>>> [11279.501475] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0
>>>> [11279.501479] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
>>>> [11279.501483] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
>>>> [11279.501489] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
>>>> [11279.501493] INFO: task glusteriotwr4:14953 blocked for more than 120 seconds.
>>>> [11279.503047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [11279.504630] glusteriotwr4   D ffff972499f2bf40     0 14953      1 0x00000080
>>>> [11279.504635] Call Trace:
>>>> [11279.504640] [<ffffffffb9913f79>] schedule+0x29/0x70
>>>> [11279.504676] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
>>>> [11279.504681] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
>>>> [11279.504710] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
>>>> [11279.504714] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
>>>> [11279.504718] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
>>>> [11279.504722] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20
>>>> [11279.504725] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
>>>> [11279.504730] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
>>>> [12127.466494] perf: interrupt took too long (8263 > 8150), lowering kernel.perf_event_max_sample_rate to 24000
>>>>
>>>> --------------------
>>>> I think this is the cause of the massive oVirt performance issues,
>>>> irrespective of gluster volume. At the time this happened, I was also
>>>> ssh'ed into the host and was running some rpm query commands. I had
>>>> just run rpm -qa | grep glusterfs (to verify what version was actually
>>>> installed), and that command took almost 2 minutes to return! Normally
>>>> it takes less than 2 seconds. That is all pure local SSD I/O, too....
>>>>
>>>> I'm no expert, but it's my understanding that any time software causes
>>>> these kinds of issues, it's a serious bug in that software, even if
>>>> it's mishandled exceptions. Is this correct?
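[Editor's note] The hung-task warnings above can be summarized with a small helper (my own, not a kernel tool) that counts, per task name, how many "blocked for more than 120 seconds" reports appear in dmesg-style input:

```shell
# Tallies kernel hung-task warnings by task name from input on stdin.
blocked_tasks() {
    awk '/INFO: task .* blocked for more than/ {
             for (i = 1; i <= NF; i++)
                 if ($i == "task") {          # next field looks like name:pid
                     split($(i + 1), a, ":")
                     counts[a[1]]++
                 }
         }
         END { for (t in counts) print counts[t], t }'
}
# On a live host: dmesg | blocked_tasks | sort -rn
```

In the log above this would show glusterfs worker threads and xfsaild stuck together in fsync paths, which points at the block device under the brick rather than at any one daemon.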
>>>> >>>> --Jim >>>> >>>> On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir <jim@palousetech.com >>>> > wrote: >>>> >>>>> I think this is the profile information for one of the volumes >>>>> that lives on the SSDs and is fully operational with no down/problem disks: >>>>> >>>>> [root@ovirt2 yum.repos.d]# gluster volume profile data info >>>>> Brick: ovirt2.nwfiber.com:/gluster/brick2/data >>>>> ---------------------------------------------- >>>>> Cumulative Stats: >>>>> Block Size: 256b+ 512b+ >>>>> 1024b+ >>>>> No. of Reads: 983 2696 >>>>> 1059 >>>>> No. of Writes: 0 1113 >>>>> 302 >>>>> >>>>> Block Size: 2048b+ 4096b+ >>>>> 8192b+ >>>>> No. of Reads: 852 88608 >>>>> 53526 >>>>> No. of Writes: 522 812340 >>>>> 76257 >>>>> >>>>> Block Size: 16384b+ 32768b+ >>>>> 65536b+ >>>>> No. of Reads: 54351 241901 >>>>> 15024 >>>>> No. of Writes: 21636 8656 >>>>> 8976 >>>>> >>>>> Block Size: 131072b+ >>>>> No. of Reads: 524156 >>>>> No. of Writes: 296071 >>>>> %-latency Avg-latency Min-Latency Max-Latency No. 
of >>>>> calls Fop >>>>> --------- ----------- ----------- ----------- >>>>> ------------ ---- >>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>> 4189 RELEASE >>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>> 1257 RELEASEDIR >>>>> 0.00 46.19 us 12.00 us 187.00 us >>>>> 69 FLUSH >>>>> 0.00 147.00 us 78.00 us 367.00 us >>>>> 86 REMOVEXATTR >>>>> 0.00 223.46 us 24.00 us 1166.00 us >>>>> 149 READDIR >>>>> 0.00 565.34 us 76.00 us 3639.00 us >>>>> 88 FTRUNCATE >>>>> 0.00 263.28 us 20.00 us 28385.00 us >>>>> 228 LK >>>>> 0.00 98.84 us 2.00 us 880.00 us >>>>> 1198 OPENDIR >>>>> 0.00 91.59 us 26.00 us 10371.00 us >>>>> 3853 STATFS >>>>> 0.00 494.14 us 17.00 us 193439.00 us >>>>> 1171 GETXATTR >>>>> 0.00 299.42 us 35.00 us 9799.00 us >>>>> 2044 READDIRP >>>>> 0.00 1965.31 us 110.00 us 382258.00 us >>>>> 321 XATTROP >>>>> 0.01 113.40 us 24.00 us 61061.00 us >>>>> 8134 STAT >>>>> 0.01 755.38 us 57.00 us 607603.00 us >>>>> 3196 DISCARD >>>>> 0.05 2690.09 us 58.00 us 2704761.00 us >>>>> 3206 OPEN >>>>> 0.10 119978.25 us 97.00 us 9406684.00 us >>>>> 154 SETATTR >>>>> 0.18 101.73 us 28.00 us 700477.00 us >>>>> 313379 FSTAT >>>>> 0.23 1059.84 us 25.00 us 2716124.00 us >>>>> 38255 LOOKUP >>>>> 0.47 1024.11 us 54.00 us 6197164.00 us >>>>> 81455 FXATTROP >>>>> 1.72 2984.00 us 15.00 us 37098954.00 us >>>>> 103020 FINODELK >>>>> 5.92 44315.32 us 51.00 us 24731536.00 us >>>>> 23957 FSYNC >>>>> 13.27 2399.78 us 25.00 us 22089540.00 us >>>>> 991005 READ >>>>> 37.00 5980.43 us 52.00 us 22099889.00 us >>>>> 1108976 WRITE >>>>> 41.04 5452.75 us 13.00 us 22102452.00 us >>>>> 1349053 INODELK >>>>> >>>>> Duration: 10026 seconds >>>>> Data Read: 80046027759 bytes >>>>> Data Written: 44496632320 bytes >>>>> >>>>> Interval 1 Stats: >>>>> Block Size: 256b+ 512b+ >>>>> 1024b+ >>>>> No. of Reads: 983 2696 >>>>> 1059 >>>>> No. of Writes: 0 838 >>>>> 185 >>>>> >>>>> Block Size: 2048b+ 4096b+ >>>>> 8192b+ >>>>> No. of Reads: 852 85856 >>>>> 51575 >>>>> No. 
of Writes: 382 705802 >>>>> 57812 >>>>> >>>>> Block Size: 16384b+ 32768b+ >>>>> 65536b+ >>>>> No. of Reads: 52673 232093 >>>>> 14984 >>>>> No. of Writes: 13499 4908 >>>>> 4242 >>>>> >>>>> Block Size: 131072b+ >>>>> No. of Reads: 460040 >>>>> No. of Writes: 6411 >>>>> %-latency Avg-latency Min-Latency Max-Latency No. of >>>>> calls Fop >>>>> --------- ----------- ----------- ----------- >>>>> ------------ ---- >>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>> 2093 RELEASE >>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>> 1093 RELEASEDIR >>>>> 0.00 53.38 us 26.00 us 111.00 us >>>>> 16 FLUSH >>>>> 0.00 145.14 us 78.00 us 367.00 us >>>>> 71 REMOVEXATTR >>>>> 0.00 190.96 us 114.00 us 298.00 us >>>>> 71 SETATTR >>>>> 0.00 213.38 us 24.00 us 1145.00 us >>>>> 90 READDIR >>>>> 0.00 263.28 us 20.00 us 28385.00 us >>>>> 228 LK >>>>> 0.00 101.76 us 2.00 us 880.00 us >>>>> 1093 OPENDIR >>>>> 0.01 93.60 us 27.00 us 10371.00 us >>>>> 3090 STATFS >>>>> 0.02 537.47 us 17.00 us 193439.00 us >>>>> 1038 GETXATTR >>>>> 0.03 297.44 us 35.00 us 9799.00 us >>>>> 1990 READDIRP >>>>> 0.03 2357.28 us 110.00 us 382258.00 us >>>>> 253 XATTROP >>>>> 0.04 385.93 us 58.00 us 47593.00 us >>>>> 2091 OPEN >>>>> 0.04 114.86 us 24.00 us 61061.00 us >>>>> 7715 STAT >>>>> 0.06 444.59 us 57.00 us 333240.00 us >>>>> 3053 DISCARD >>>>> 0.42 316.24 us 25.00 us 290728.00 us >>>>> 29823 LOOKUP >>>>> 0.73 257.92 us 54.00 us 344812.00 us >>>>> 63296 FXATTROP >>>>> 1.37 98.30 us 28.00 us 67621.00 us >>>>> 313172 FSTAT >>>>> 1.58 2124.69 us 51.00 us 849200.00 us >>>>> 16717 FSYNC >>>>> 5.73 162.46 us 52.00 us 748492.00 us >>>>> 794079 WRITE >>>>> 7.19 2065.17 us 16.00 us 37098954.00 us >>>>> 78381 FINODELK >>>>> 36.44 886.32 us 25.00 us 2216436.00 us >>>>> 925421 READ >>>>> 46.30 1178.04 us 13.00 us 1700704.00 us >>>>> 884635 INODELK >>>>> >>>>> Duration: 7485 seconds >>>>> Data Read: 71250527215 bytes >>>>> Data Written: 5119903744 bytes >>>>> >>>>> Brick: ovirt3.nwfiber.com:/gluster/brick2/data >>>>> 
---------------------------------------------- >>>>> Cumulative Stats: >>>>> Block Size: 1b+ >>>>> No. of Reads: 0 >>>>> No. of Writes: 3264419 >>>>> %-latency Avg-latency Min-Latency Max-Latency No. of >>>>> calls Fop >>>>> --------- ----------- ----------- ----------- >>>>> ------------ ---- >>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>> 90 FORGET >>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>> 9462 RELEASE >>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>> 4254 RELEASEDIR >>>>> 0.00 50.52 us 13.00 us 190.00 us >>>>> 71 FLUSH >>>>> 0.00 186.97 us 87.00 us 713.00 us >>>>> 86 REMOVEXATTR >>>>> 0.00 79.32 us 33.00 us 189.00 us >>>>> 228 LK >>>>> 0.00 220.98 us 129.00 us 513.00 us >>>>> 86 SETATTR >>>>> 0.01 259.30 us 26.00 us 2632.00 us >>>>> 137 READDIR >>>>> 0.02 322.76 us 145.00 us 2125.00 us >>>>> 321 XATTROP >>>>> 0.03 109.55 us 2.00 us 1258.00 us >>>>> 1193 OPENDIR >>>>> 0.05 70.21 us 21.00 us 431.00 us >>>>> 3196 DISCARD >>>>> 0.05 169.26 us 21.00 us 2315.00 us >>>>> 1545 GETXATTR >>>>> 0.12 176.85 us 63.00 us 2844.00 us >>>>> 3206 OPEN >>>>> 0.61 303.49 us 90.00 us 3085.00 us >>>>> 9633 FSTAT >>>>> 2.44 305.66 us 28.00 us 3716.00 us >>>>> 38230 LOOKUP >>>>> 4.52 266.22 us 55.00 us 53424.00 us >>>>> 81455 FXATTROP >>>>> 6.96 1397.99 us 51.00 us 64822.00 us >>>>> 23889 FSYNC >>>>> 16.48 84.74 us 25.00 us 6917.00 us >>>>> 932592 WRITE >>>>> 30.16 106.90 us 13.00 us 3920189.00 us >>>>> 1353046 INODELK >>>>> 38.55 1794.52 us 14.00 us 16210553.00 us >>>>> 103039 FINODELK >>>>> >>>>> Duration: 66562 seconds >>>>> Data Read: 0 bytes >>>>> Data Written: 3264419 bytes >>>>> >>>>> Interval 1 Stats: >>>>> Block Size: 1b+ >>>>> No. of Reads: 0 >>>>> No. of Writes: 794080 >>>>> %-latency Avg-latency Min-Latency Max-Latency No. 
of >>>>> calls Fop >>>>> --------- ----------- ----------- ----------- >>>>> ------------ ---- >>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>> 2093 RELEASE >>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>> 1093 RELEASEDIR >>>>> 0.00 70.31 us 26.00 us 125.00 us >>>>> 16 FLUSH >>>>> 0.00 193.10 us 103.00 us 713.00 us >>>>> 71 REMOVEXATTR >>>>> 0.01 227.32 us 133.00 us 513.00 us >>>>> 71 SETATTR >>>>> 0.01 79.32 us 33.00 us 189.00 us >>>>> 228 LK >>>>> 0.01 259.83 us 35.00 us 1138.00 us >>>>> 89 READDIR >>>>> 0.03 318.26 us 145.00 us 2047.00 us >>>>> 253 XATTROP >>>>> 0.04 112.67 us 3.00 us 1258.00 us >>>>> 1093 OPENDIR >>>>> 0.06 167.98 us 23.00 us 1951.00 us >>>>> 1014 GETXATTR >>>>> 0.08 70.97 us 22.00 us 431.00 us >>>>> 3053 DISCARD >>>>> 0.13 183.78 us 66.00 us 2844.00 us >>>>> 2091 OPEN >>>>> 1.01 303.82 us 90.00 us 3085.00 us >>>>> 9610 FSTAT >>>>> 3.27 316.59 us 30.00 us 3716.00 us >>>>> 29820 LOOKUP >>>>> 5.83 265.79 us 59.00 us 53424.00 us >>>>> 63296 FXATTROP >>>>> 7.95 1373.89 us 51.00 us 64822.00 us >>>>> 16717 FSYNC >>>>> 23.17 851.99 us 14.00 us 16210553.00 us >>>>> 78555 FINODELK >>>>> 24.04 87.44 us 27.00 us 6917.00 us >>>>> 794081 WRITE >>>>> 34.36 111.91 us 14.00 us 984871.00 us >>>>> 886790 INODELK >>>>> >>>>> Duration: 7485 seconds >>>>> Data Read: 0 bytes >>>>> Data Written: 794080 bytes >>>>> >>>>> >>>>> ----------------------- >>>>> Here is the data from the volume that is backed by the SHDDs and >>>>> has one failed disk: >>>>> [root@ovirt2 yum.repos.d]# gluster volume profile data-hdd info >>>>> Brick: 172.172.1.12:/gluster/brick3/data-hdd >>>>> -------------------------------------------- >>>>> Cumulative Stats: >>>>> Block Size: 256b+ 512b+ >>>>> 1024b+ >>>>> No. of Reads: 1702 86 >>>>> 16 >>>>> No. of Writes: 0 767 >>>>> 71 >>>>> >>>>> Block Size: 2048b+ 4096b+ >>>>> 8192b+ >>>>> No. of Reads: 19 51841 >>>>> 2049 >>>>> No. of Writes: 76 60668 >>>>> 35727 >>>>> >>>>> Block Size: 16384b+ 32768b+ >>>>> 65536b+ >>>>> No. 
of Reads: 1744 639 >>>>> 1088 >>>>> No. of Writes: 8524 2410 >>>>> 1285 >>>>> >>>>> Block Size: 131072b+ >>>>> No. of Reads: 771999 >>>>> No. of Writes: 29584 >>>>> %-latency Avg-latency Min-Latency Max-Latency No. of >>>>> calls Fop >>>>> --------- ----------- ----------- ----------- >>>>> ------------ ---- >>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>> 2902 RELEASE >>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>> 1517 RELEASEDIR >>>>> 0.00 197.00 us 197.00 us 197.00 us >>>>> 1 FTRUNCATE >>>>> 0.00 70.24 us 16.00 us 758.00 us >>>>> 51 FLUSH >>>>> 0.00 143.93 us 82.00 us 305.00 us >>>>> 57 REMOVEXATTR >>>>> 0.00 178.63 us 105.00 us 712.00 us >>>>> 60 SETATTR >>>>> 0.00 67.30 us 19.00 us 572.00 us >>>>> 555 LK >>>>> 0.00 322.80 us 23.00 us 4673.00 us >>>>> 138 READDIR >>>>> 0.00 336.56 us 106.00 us 11994.00 us >>>>> 237 XATTROP >>>>> 0.00 84.70 us 28.00 us 1071.00 us >>>>> 3469 STATFS >>>>> 0.01 387.75 us 2.00 us 146017.00 us >>>>> 1467 OPENDIR >>>>> 0.01 148.59 us 21.00 us 64374.00 us >>>>> 4454 STAT >>>>> 0.02 783.02 us 16.00 us 93502.00 us >>>>> 1902 GETXATTR >>>>> 0.03 1516.10 us 17.00 us 210690.00 us >>>>> 1364 ENTRYLK >>>>> 0.03 2555.47 us 300.00 us 674454.00 us >>>>> 1064 READDIRP >>>>> 0.07 85.74 us 19.00 us 68340.00 us >>>>> 62849 FSTAT >>>>> 0.07 1978.12 us 59.00 us 202596.00 us >>>>> 2729 OPEN >>>>> 0.22 708.57 us 15.00 us 394799.00 us >>>>> 25447 LOOKUP >>>>> 5.94 2331.74 us 15.00 us 1099530.00 us >>>>> 207534 FINODELK >>>>> 7.31 8311.75 us 58.00 us 1800216.00 us >>>>> 71668 FXATTROP >>>>> 12.49 7735.19 us 51.00 us 3595513.00 us >>>>> 131642 WRITE >>>>> 17.70 957.08 us 16.00 us 13700466.00 us >>>>> 1508160 INODELK >>>>> 24.55 2546.43 us 26.00 us 5077347.00 us >>>>> 786060 READ >>>>> 31.56 49699.15 us 47.00 us 3746331.00 us >>>>> 51777 FSYNC >>>>> >>>>> Duration: 10101 seconds >>>>> Data Read: 101562897361 bytes >>>>> Data Written: 4834450432 bytes >>>>> >>>>> Interval 0 Stats: >>>>> Block Size: 256b+ 512b+ >>>>> 1024b+ >>>>> No. of Reads: 1702 86 >>>>> 16 >>>>> No. 
of Writes: 0 767 >>>>> 71 >>>>> >>>>> Block Size: 2048b+ 4096b+ >>>>> 8192b+ >>>>> No. of Reads: 19 51841 >>>>> 2049 >>>>> No. of Writes: 76 60668 >>>>> 35727 >>>>> >>>>> Block Size: 16384b+ 32768b+ >>>>> 65536b+ >>>>> No. of Reads: 1744 639 >>>>> 1088 >>>>> No. of Writes: 8524 2410 >>>>> 1285 >>>>> >>>>> Block Size: 131072b+ >>>>> No. of Reads: 771999 >>>>> No. of Writes: 29584 >>>>> %-latency Avg-latency Min-Latency Max-Latency No. of >>>>> calls Fop >>>>> --------- ----------- ----------- ----------- >>>>> ------------ ---- >>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>> 2902 RELEASE >>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>> 1517 RELEASEDIR >>>>> 0.00 197.00 us 197.00 us 197.00 us >>>>> 1 FTRUNCATE >>>>> 0.00 70.24 us 16.00 us 758.00 us >>>>> 51 FLUSH >>>>> 0.00 143.93 us 82.00 us 305.00 us >>>>> 57 REMOVEXATTR >>>>> 0.00 178.63 us 105.00 us 712.00 us >>>>> 60 SETATTR >>>>> 0.00 67.30 us 19.00 us 572.00 us >>>>> 555 LK >>>>> 0.00 322.80 us 23.00 us 4673.00 us >>>>> 138 READDIR >>>>> 0.00 336.56 us 106.00 us 11994.00 us >>>>> 237 XATTROP >>>>> 0.00 84.70 us 28.00 us 1071.00 us >>>>> 3469 STATFS >>>>> 0.01 387.75 us 2.00 us 146017.00 us >>>>> 1467 OPENDIR >>>>> 0.01 148.59 us 21.00 us 64374.00 us >>>>> 4454 STAT >>>>> 0.02 783.02 us 16.00 us 93502.00 us >>>>> 1902 GETXATTR >>>>> 0.03 1516.10 us 17.00 us 210690.00 us >>>>> 1364 ENTRYLK >>>>> 0.03 2555.47 us 300.00 us 674454.00 us >>>>> 1064 READDIRP >>>>> 0.07 85.73 us 19.00 us 68340.00 us >>>>> 62849 FSTAT >>>>> 0.07 1978.12 us 59.00 us 202596.00 us >>>>> 2729 OPEN >>>>> 0.22 708.57 us 15.00 us 394799.00 us >>>>> 25447 LOOKUP >>>>> 5.94 2334.57 us 15.00 us 1099530.00 us >>>>> 207534 FINODELK >>>>> 7.31 8311.49 us 58.00 us 1800216.00 us >>>>> 71668 FXATTROP >>>>> 12.49 7735.32 us 51.00 us 3595513.00 us >>>>> 131642 WRITE >>>>> 17.71 957.08 us 16.00 us 13700466.00 us >>>>> 1508160 INODELK >>>>> 24.56 2546.42 us 26.00 us 5077347.00 us >>>>> 786060 READ >>>>> 31.54 49651.63 us 47.00 us 3746331.00 us >>>>> 51777 FSYNC >>>>> 
>>>>> Duration: 10101 seconds
>>>>> Data Read: 101562897361 bytes
>>>>> Data Written: 4834450432 bytes
>>>>>
>>>>> On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir <jim@palousetech.com> wrote:
>>>>>
>>>>>> Thank you for your response.
>>>>>>
>>>>>> I have 4 gluster volumes. 3 are replica 2 + arbitrator; the replica bricks are on ovirt1 and ovirt2, the arbitrator on ovirt3. The 4th volume is replica 3, with a brick on all three ovirt machines.
>>>>>>
>>>>>> The first 3 volumes are on an SSD disk; the 4th is on a Seagate SSHD (same in all three machines). On ovirt3, the SSHD has reported hard IO failures, and that brick is offline. However, the other two replicas are fully operational (although they still show contents in the heal info command that won't go away, but that may be the case until I replace the failed disk).
>>>>>>
>>>>>> What is bothering me is that ALL 4 gluster volumes are showing horrible performance issues. At this point, as the bad disk has been completely offlined, I would expect gluster to perform at normal speed, but that is definitely not the case.
>>>>>>
>>>>>> I've also noticed that the performance hits seem to come in waves: things seem to work acceptably (but slow) for a while, then suddenly, it's as if all disk IO on all volumes (including non-gluster local OS disk volumes for the hosts) pauses for about 30 seconds, then IO resumes again. During those times, I start getting VM not responding and host not responding notices as well as the applications having major issues.
>>>>>>
>>>>>> I've shut down most of my VMs and am down to just my essential core VMs (shed about 75% of my VMs). I am still experiencing the same issues.
>>>>>>
>>>>>> Am I correct in believing that once the failed disk was brought offline that performance should return to normal?
>>>>>>
>>>>>> On Tue, May 29, 2018 at 1:27 PM, Alex K <rightkicktech@gmail.com> wrote:
>>>>>>
>>>>>>> I would check disks status and accessibility of mount points where your gluster volumes reside.
>>>>>>>
>>>>>>> On Tue, May 29, 2018, 22:28 Jim Kusznir <jim@palousetech.com> wrote:
>>>>>>>
>>>>>>>> On one ovirt server, I'm now seeing these messages:
>>>>>>>> [56474.239725] blk_update_request: 63 callbacks suppressed
>>>>>>>> [56474.239732] blk_update_request: I/O error, dev dm-2, sector 0
>>>>>>>> [56474.240602] blk_update_request: I/O error, dev dm-2, sector 3905945472
>>>>>>>> [56474.241346] blk_update_request: I/O error, dev dm-2, sector 3905945584
>>>>>>>> [56474.242236] blk_update_request: I/O error, dev dm-2, sector 2048
>>>>>>>> [56474.243072] blk_update_request: I/O error, dev dm-2, sector 3905943424
>>>>>>>> [56474.243997] blk_update_request: I/O error, dev dm-2, sector 3905943536
>>>>>>>> [56474.247347] blk_update_request: I/O error, dev dm-2, sector 0
>>>>>>>> [56474.248315] blk_update_request: I/O error, dev dm-2, sector 3905945472
>>>>>>>> [56474.249231] blk_update_request: I/O error, dev dm-2, sector 3905945584
>>>>>>>> [56474.250221] blk_update_request: I/O error, dev dm-2, sector 2048
>>>>>>>>
>>>>>>>> On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir <jim@palousetech.com> wrote:
>>>>>>>>
>>>>>>>>> I see in messages on ovirt3 (my 3rd machine, the one upgraded to 4.2):
>>>>>>>>>
>>>>>>>>> May 29 11:54:41 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
>>>>>>>>> May 29 11:54:51 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
>>>>>>>>> May 29 11:55:01 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
>>>>>>>>> (appears a lot).
>>>>>>>>>
>>>>>>>>> I also found, in the ssh session on that host, some sysv warnings about the backing disk for one of the gluster volumes (straight replica 3). The glusterfs process for that disk on that machine went offline. It's my understanding that it should continue to work with the other two machines while I attempt to replace that disk, right? Attempted writes (touching an empty file) can take 15 seconds, though repeating it later will be much faster.
>>>>>>>>>
>>>>>>>>> Gluster generates a bunch of different log files; I don't know which ones you want, or from which machine(s).
>>>>>>>>>
>>>>>>>>> How do I do "volume profiling"?
>>>>>>>>>
>>>>>>>>> Thanks!
>>>>>>>>>
>>>>>>>>> On Tue, May 29, 2018 at 11:53 AM, Sahina Bose <sabose@redhat.com> wrote:
>>>>>>>>>
>>>>>>>>>> Do you see errors reported in the mount logs for the volume? If so, could you attach the logs? Any issues with your underlying disks? Can you also attach output of volume profiling?
>>>>>>>>>>
>>>>>>>>>> On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir <jim@palousetech.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Ok, things have gotten MUCH worse this morning. I'm getting random errors from VMs; right now, about a third of my VMs have been paused due to storage issues, and most of the remaining VMs are not performing well.
>>>>>>>>>>>
>>>>>>>>>>> At this point, I am in full EMERGENCY mode, as my production services are now impacted, and I'm getting calls coming in with problems...
>>>>>>>>>>>
>>>>>>>>>>> I'd greatly appreciate help... VMs are running VERY slowly (when they run), and they are steadily getting worse. I don't know why.
I was seeing CPU peaks (to 100%) on several VMs, in perfect sync, for a few minutes at a time (while the VM became unresponsive, and any Linux VMs I was logged into were giving me the CPU stuck messages in my original post). Is all this storage related?
>>>>>>>>>>>
>>>>>>>>>>> I also have two different gluster volumes for VM storage, and only one had the issues, but now VMs in both are being affected at the same time and in the same way.
>>>>>>>>>>>
>>>>>>>>>>> --Jim
>>>>>>>>>>>
>>>>>>>>>>> On Mon, May 28, 2018 at 10:50 PM, Sahina Bose <sabose@redhat.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> [Adding gluster-users to look at the heal issue]
>>>>>>>>>>>>
>>>>>>>>>>>> On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir <jim@palousetech.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hello:
>>>>>>>>>>>>>
>>>>>>>>>>>>> I've been having some cluster and gluster performance issues lately. I also found that my cluster was out of date, and was trying to apply updates (hoping to fix some of these), and discovered the ovirt 4.1 repos were taken completely offline. So, I was forced to begin an upgrade to 4.2. According to docs I found/read, I needed only add the new repo, do a yum update, reboot, and be good on my hosts (did the yum update, the engine-setup on my hosted engine). Things seemed to work relatively well, except for a gluster sync issue that showed up.
>>>>>>>>>>>>>
>>>>>>>>>>>>> My cluster is a 3 node hyperconverged cluster. I upgraded the hosted engine first, then engine 3. When engine 3 came back up, for some reason one of my gluster volumes would not sync.
Here's sample output:
>>>>>>>>>>>>>
>>>>>>>>>>>>> [root@ovirt3 ~]# gluster volume heal data-hdd info
>>>>>>>>>>>>> Brick 172.172.1.11:/gluster/brick3/data-hdd
>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
>>>>>>>>>>>>> Status: Connected
>>>>>>>>>>>>> Number of entries: 8
>>>>>>>>>>>>>
>>>>>>>>>>>>> Brick 172.172.1.12:/gluster/brick3/data-hdd
>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
>>>>>>>>>>>>> Status: Connected
>>>>>>>>>>>>> Number of entries: 8
>>>>>>>>>>>>>
>>>>>>>>>>>>> Brick 172.172.1.13:/gluster/brick3/data-hdd
>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
>>>>>>>>>>>>> Status: Connected
>>>>>>>>>>>>> Number of entries: 8
>>>>>>>>>>>>>
>>>>>>>>>>>>> ---------
>>>>>>>>>>>>> It's been in this state for a couple days now, and bandwidth monitoring shows no appreciable data moving. I've tried repeatedly commanding a full heal from all three nodes in the cluster. It's always the same files that need healing.
>>>>>>>>>>>>>
>>>>>>>>>>>>> When running gluster volume heal data-hdd statistics, I sometimes see different information, but always some number of "heal failed" entries. It shows 0 for split brain.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I'm not quite sure what to do. I suspect it may be due to nodes 1 and 2 still being on the older ovirt/gluster release, but I'm afraid to upgrade and reboot them until I have a good gluster sync (don't need to create a split brain issue). How do I proceed with this?
>>>>>>>>>>>>>
>>>>>>>>>>>>> Second issue: I've been experiencing VERY POOR performance on most of my VMs.
To the tune that logging into a Windows 10 VM via remote desktop can take 5 minutes, and launching QuickBooks inside said VM can easily take 10 minutes. On some Linux VMs, I get random messages like this:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Message from syslogd@unifi at May 28 20:39:23 ...
>>>>>>>>>>>>> kernel:[6171996.308904] NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [mongod:14766]
>>>>>>>>>>>>>
>>>>>>>>>>>>> (the process and PID are often different)
>>>>>>>>>>>>>
>>>>>>>>>>>>> I'm not quite sure what to do about this either. My initial thought was to upgrade everything to current and see if it's still there, but I cannot move forward with that until my gluster is healed...
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks!
>>>>>>>>>>>>> --Jim
>>>>>>>>>>>>>
>>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>>> Users mailing list -- users@ovirt.org
>>>>>>>>>>>>> To unsubscribe send an email to users-leave@ovirt.org
>>>>>>>>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>>>>>>>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>>>>>>>>>>>>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3LEV6ZQ3JV2XLAL7NYBTXOYMYUOTIRQF/
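The "volume profiling" requested in the thread above is gluster's built-in io-stats profiler, which produces exactly the kind of per-fop latency tables quoted earlier. A minimal sketch of how it is typically driven (the volume name `data-hdd` is taken from this thread; run on any node of the trusted pool):

```shell
# Enable per-brick profiling on the volume (adds some overhead while active).
gluster volume profile data-hdd start
# ...let it observe the real workload for a while, then dump the
# cumulative and interval statistics:
gluster volume profile data-hdd info
# Stop collecting once you have captured enough data.
gluster volume profile data-hdd stop
```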

Ok, finally fixed my ovirt engine, and it's trying to run... but it turns out that it's seeing 60-90% iowait, and is about to be killed off by the HA agent because it can't get fully online in time. I did make the changes recommended about locks, and it doesn't seem to have helped things any. The 3rd server (with the bad brick) is powered off at this point, so I have two servers, each with replicas of all volumes, and the 3rd server not even responding to pings...

--Jim

On Wed, May 30, 2018 at 1:53 AM, Jim Kusznir <jim@palousetech.com> wrote:
Ok, made it through this one finally. Turns out there's a little log file, "sanlock.log", that had the magic info in it... the ...dom_md/ids file got messed with in the gluster crash; I had to recover that from a backup, then run hosted-engine --reinitialize-lockspace --force, and then it started working again. Now, for some reason, my hosted engine starts up, but it has no networking, and hosted-engine --console doesn't work either... Still looking into this now.
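The recovery sequence described above can be summarized as a short runbook. This is a sketch only: the backup location and the storage-domain mount path below are illustrative placeholders, not paths taken from this thread.

```shell
# Stop the HA services so nothing races the repair (run on each host).
systemctl stop ovirt-ha-agent ovirt-ha-broker
# Restore the damaged sanlock ids file from a known-good backup.
# BACKUP_DIR and the glusterSD mount path are placeholders.
cp "$BACKUP_DIR/ids" /rhev/data-center/mnt/glusterSD/HOST:_engine/SD_UUID/dom_md/ids
# Rebuild the hosted-engine sanlock lockspace, as described above.
hosted-engine --reinitialize-lockspace --force
# Bring the HA services back up.
systemctl start ovirt-ha-broker ovirt-ha-agent
```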
On Wed, May 30, 2018 at 1:29 AM, Jim Kusznir <jim@palousetech.com> wrote:
So, right now I think this message from vdsm.log captures my blocking problem:
2018-05-30 01:27:22,326-0700 INFO (jsonrpc/5) [vdsm.api] FINISH repoStats return={u'c0acdefb-7d16-48ec-9d76-659b8fe33e2a': {'code': 0, 'actual': True, 'version': 4, 'acquired': False, 'delay': '0.00142076', 'lastCheck': '1.1', 'valid': True}} from=::1,60684, task_id=7da641b6-9a98-4db7-aafb-04ff84228cf5 (api:52)
2018-05-30 01:27:22,326-0700 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:573)
2018-05-30 01:27:23,411-0700 ERROR (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') The vm start process failed (vm:943)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 872, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2872, in _run
    dom.createWithFlags(flags)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1099, in createWithFlags
    if ret == -1: raise libvirtError('virDomainCreateWithFlags() failed', dom=self)
libvirtError: Failed to acquire lock: No space left on device
2018-05-30 01:27:23,411-0700 INFO (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') Changed state to Down: Failed to acquire lock: No space left on device (code=1) (vm:1683)
The strange thing is I can't figure out what device has no space left. The only hits on Google talk about deleting the __DIRECT_IO_TEST__ file in the storage repo, which I tried, but it did not help. Perhaps one of the files in the engine volume stores locks, and that file is full or corrupt?
--Jim
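For what it's worth, libvirt's "Failed to acquire lock: No space left on device" can come from the sanlock lock manager rather than a genuinely full filesystem (the sanlock.log finding reported later in this thread points the same way). A hedged sketch of checks that help tell the two apart:

```shell
# Rule out an actually-full filesystem first.
df -h /rhev/data-center/mnt
# Inspect sanlock's view of its lockspaces and held resources.
sanlock client status
# Recent sanlock activity and errors (output can be long).
sanlock client log_dump | tail -n 50
```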
On Wed, May 30, 2018 at 12:25 AM, Jim Kusznir <jim@palousetech.com> wrote:
I tried to capture a consistent log slice across the cluster from agent.log. Several systems claim they are starting the engine, but virsh never shows any sign of it. They also claim that a UUID, one I'm pretty sure is not affiliated with my engine, is not found.
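One way to cross-check the "virsh never shows any sign of it" observation on a 4.2 host (a sketch; vdsm configures SASL auth for libvirt, so the read-only flag avoids the credential prompt):

```shell
# Ask libvirt directly, read-only, whether the engine VM is defined/running.
virsh -r list --all
# Ask vdsm itself which VMs it believes exist on this host (oVirt 4.2+).
vdsm-client Host getVMList
```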
--------- ovirt1: ---------
MainThread::INFO::2018-05-30 00:18:11,628::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineForceStop-EngineDown) sent? sent
[root@ovirt1 vdsm]# tail -n 20 ../ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2018-05-30 00:14:02,154::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400)
MainThread::INFO::2018-05-30 00:14:02,173::states::776::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Another host already took over..
MainThread::INFO::2018-05-30 00:14:02,174::state_decorators::92::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Timeout cleared while transitioning <class 'ovirt_hosted_engine_ha.agent.states.EngineStarting'> -> <class 'ovirt_hosted_engine_ha.agent.states.EngineForceStop'>
MainThread::INFO::2018-05-30 00:14:02,533::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStarting-EngineForceStop) sent? sent
MainThread::INFO::2018-05-30 00:14:02,843::ovf_store::151::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8-49e73e6ba929
MainThread::INFO::2018-05-30 00:18:09,236::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineForceStop (score: 3400)
MainThread::INFO::2018-05-30 00:18:09,236::hosted_engine::499::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Best remote host ovirt3.nwfiber.com (id: 3, score: 3400)
MainThread::INFO::2018-05-30 00:18:09,253::hosted_engine::1007::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) Shutting down vm using `/usr/sbin/hosted-engine --vm-poweroff`
MainThread::INFO::2018-05-30 00:18:11,159::hosted_engine::1012::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) stdout:
MainThread::INFO::2018-05-30 00:18:11,160::hosted_engine::1013::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) stderr:
MainThread::ERROR::2018-05-30 00:18:11,160::hosted_engine::1021::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) Engine VM stopped on localhost
MainThread::INFO::2018-05-30 00:18:11,628::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineForceStop-EngineDown) sent? sent
MainThread::INFO::2018-05-30 00:22:17,692::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineDown (score: 3400)
MainThread::INFO::2018-05-30 00:22:17,713::state_machine::169::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Global metadata: {'maintenance': False}
MainThread::INFO::2018-05-30 00:22:17,713::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host ovirt2.nwfiber.com (id 2): {'conf_on_shared_storage': True, 'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=4348 (Wed May 30 00:16:46 2018)\nhost-id=2\nscore=3400\nvm_conf_refresh_time=4349 (Wed May 30 00:16:47 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineDown\nstopped=False\n', 'hostname': 'ovirt2.nwfiber.com', 'alive': True, 'host-id': 2, 'engine-status': {'reason': 'bad vm status', 'health': 'bad', 'vm': 'down_unexpected', 'detail': 'Down'}, 'score': 3400, 'stopped': False, 'maintenance': False, 'crc32': '5ee0a557', 'local_conf_timestamp': 4349, 'host-ts': 4348}
MainThread::INFO::2018-05-30 00:22:17,713::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host ovirt3.nwfiber.com (id 3): {'conf_on_shared_storage': True, 'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=100036 (Wed May 30 00:16:58 2018)\nhost-id=3\nscore=3400\nvm_conf_refresh_time=100037 (Wed May 30 00:16:58 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineStart\nstopped=False\n', 'hostname': 'ovirt3.nwfiber.com', 'alive': True, 'host-id': 3, 'engine-status': {'reason': 'bad vm status', 'health': 'bad', 'vm': 'down_unexpected', 'detail': 'Down'}, 'score': 3400, 'stopped': False, 'maintenance': False, 'crc32': 'e2fadf31', 'local_conf_timestamp': 100037, 'host-ts': 100036}
MainThread::INFO::2018-05-30 00:22:17,713::state_machine::177::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Local (id 1): {'engine-health': {'reason': 'vm not running on this host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'bridge': True, 'mem-free': 30186.0, 'maintenance': False, 'cpu-load': 0.0308, 'gateway': 1.0, 'storage-domain': True}
MainThread::INFO::2018-05-30 00:22:17,714::states::508::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine down and local host has best score (3400), attempting to start engine VM
MainThread::INFO::2018-05-30 00:22:18,309::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineDown-EngineStart) sent? sent
MainThread::INFO::2018-05-30 00:22:18,624::ovf_store::151::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8-49e73e6ba929
---------- ovirt2 ----------
MainThread::INFO::2018-05-30 00:08:34,534::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineUnexpectedlyDown (score: 0)
MainThread::INFO::2018-05-30 00:08:34,534::hosted_engine::499::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Best remote host ovirt3.nwfiber.com (id: 3, score: 3400)
MainThread::INFO::2018-05-30 00:08:34,545::states::678::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine down, local host does not have best score
MainThread::INFO::2018-05-30 00:12:40,627::states::693::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score) Score is 0 due to unexpected vm shutdown at Wed May 30 00:04:28 2018
MainThread::INFO::2018-05-30 00:12:40,628::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineUnexpectedlyDown (score: 0)
MainThread::INFO::2018-05-30 00:12:40,628::hosted_engine::499::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Best remote host ovirt1.nwfiber.com (id: 1, score: 3400)
MainThread::INFO::2018-05-30 00:12:40,643::states::678::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine down, local host does not have best score
MainThread::INFO::2018-05-30 00:12:40,827::ovf_store::151::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8-49e73e6ba929
MainThread::INFO::2018-05-30 00:16:46,635::states::693::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score) Score is 0 due to unexpected vm shutdown at Wed May 30 00:04:28 2018
MainThread::INFO::2018-05-30 00:16:46,636::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineUnexpectedlyDown (score: 0)
MainThread::INFO::2018-05-30 00:16:46,648::state_machine::169::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Global metadata: {'maintenance': False}
MainThread::INFO::2018-05-30 00:16:46,648::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host ovirt1.nwfiber.com (id 1): {'conf_on_shared_storage': True, 'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=7614 (Wed May 30 00:09:54 2018)\nhost-id=1\nscore=3400\nvm_conf_refresh_time=7614 (Wed May 30 00:09:54 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineStarting\nstopped=False\n', 'hostname': 'ovirt1.nwfiber.com', 'alive': True, 'host-id': 1, 'engine-status': {'reason': 'vm not running on this host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'score': 3400, 'stopped': False, 'maintenance': False, 'crc32': '1fa13e84', 'local_conf_timestamp': 7614, 'host-ts': 7614}
MainThread::INFO::2018-05-30 00:16:46,648::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host ovirt3.nwfiber.com (id 3): {'conf_on_shared_storage': True, 'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=99541 (Wed May 30 00:08:43 2018)\nhost-id=3\nscore=3400\nvm_conf_refresh_time=99542 (Wed May 30 00:08:43 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineForceStop\nstopped=False\n', 'hostname': 'ovirt3.nwfiber.com', 'alive': True, 'host-id': 3, 'engine-status': {'reason': 'bad vm status', 'health': 'bad', 'vm': 'down', 'detail': 'Down'}, 'score': 3400, 'stopped': False, 'maintenance': False, 'crc32': '56f87e2c', 'local_conf_timestamp': 99542, 'host-ts': 99541}
MainThread::INFO::2018-05-30 00:16:46,648::state_machine::177::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Local (id 2): {'engine-health': {'reason': 'bad vm status', 'health': 'bad', 'vm': 'down_unexpected', 'detail': 'Down'}, 'bridge': True, 'mem-free': 30891.0, 'maintenance': False, 'cpu-load': 0.0186, 'gateway': 1.0, 'storage-domain': True}
MainThread::INFO::2018-05-30 00:16:46,919::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineUnexpectedlyDown-EngineDown) sent? sent
MainThread::INFO::2018-05-30 00:20:52,918::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineDown (score: 3400)
MainThread::INFO::2018-05-30 00:20:52,918::hosted_engine::499::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Best remote host ovirt1.nwfiber.com (id: 1, score: 3400)
MainThread::INFO::2018-05-30 00:20:52,930::states::508::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine down and local host has best score (3400), attempting to start engine VM
MainThread::INFO::2018-05-30 00:20:53,203::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineDown-EngineStart) sent? sent
MainThread::INFO::2018-05-30 00:20:53,393::ovf_store::151::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8-49e73e6ba929
-------- ovirt3 -------
MainThread::INFO::2018-05-30 00:21:07,343::hosted_engine::929::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) Starting vm using `/usr/sbin/hosted-engine --vm-start`
MainThread::INFO::2018-05-30 00:21:08,107::hosted_engine::935::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) stdout:
MainThread::INFO::2018-05-30 00:21:08,108::hosted_engine::936::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) stderr: Command VM.getStats with args {'vmID': '50392390-4f89-435c-bf3b-8254b58a4ef7'} failed: (code=1, message=Virtual machine does not exist: {'vmId': u'50392390-4f89-435c-bf3b-8254b58a4ef7'})
MainThread::INFO::2018-05-30 00:21:08,108::hosted_engine::948::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) Engine VM started on localhost
MainThread::INFO::2018-05-30 00:21:08,413::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStart-EngineStarting) sent? sent
[root@ovirt3 images]# tail -n 20 /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2018-05-30 00:12:51,792::hosted_engine::1012::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) stdout:
MainThread::INFO::2018-05-30 00:12:51,792::hosted_engine::1013::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) stderr:
MainThread::ERROR::2018-05-30 00:12:51,793::hosted_engine::1021::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) Engine VM stopped on localhost
MainThread::INFO::2018-05-30 00:12:52,157::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineForceStop-EngineDown) sent? sent
MainThread::INFO::2018-05-30 00:16:58,237::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineDown (score: 3400)
MainThread::INFO::2018-05-30 00:16:58,256::states::508::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine down and local host has best score (3400), attempting to start engine VM
MainThread::INFO::2018-05-30 00:16:58,595::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineDown-EngineStart) sent? sent
MainThread::INFO::2018-05-30 00:16:58,816::ovf_store::151::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8-49e73e6ba929
MainThread::INFO::2018-05-30 00:21:05,082::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStart (score: 3400)
MainThread::INFO::2018-05-30 00:21:05,083::hosted_engine::499::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Best remote host ovirt1.nwfiber.com (id: 1, score: 3400)
MainThread::INFO::2018-05-30 00:21:05,125::hosted_engine::968::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state) Ensuring VDSM state is clear for engine VM
MainThread::INFO::2018-05-30 00:21:05,141::hosted_engine::988::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state) Cleaning state for non-running VM
MainThread::INFO::2018-05-30 00:21:07,343::hosted_engine::980::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state) Vdsm state for VM clean
MainThread::INFO::2018-05-30 00:21:07,343::hosted_engine::929::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) Starting vm using `/usr/sbin/hosted-engine --vm-start`
MainThread::INFO::2018-05-30 00:21:08,107::hosted_engine::935::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) stdout:
MainThread::INFO::2018-05-30 00:21:08,108::hosted_engine::936::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) stderr: Command VM.getStats with args {'vmID': '50392390-4f89-435c-bf3b-8254b58a4ef7'} failed: (code=1, message=Virtual machine does not exist: {'vmId': u'50392390-4f89-435c-bf3b-8254b58a4ef7'})
MainThread::INFO::2018-05-30 00:21:08,108::hosted_engine::948::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) Engine VM started on localhost
MainThread::INFO::2018-05-30 00:21:08,413::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStart-EngineStarting) sent? sent
---------- note that virsh does not show engine started on this machine....
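The HA state machine in the agent.log excerpts above is easier to follow if you strip it down to just the state_transition notifications. A minimal sketch (shell, run over an embedded sample of the lines above; on a live host you would point the same pipeline at /var/log/ovirt-hosted-engine-ha/agent.log instead of the here-document):

```shell
# Extract the "state_transition (From-To)" tokens from agent.log lines.
# The sample below is abbreviated from the log excerpt above; the "..."
# stands in for the full brokerlink module path and is not load-bearing.
agent=$(cat <<'EOF'
MainThread::INFO::2018-05-30 00:12:52,157::brokerlink::68::...::(notify) Success, was notification of state_transition (EngineForceStop-EngineDown) sent? sent
MainThread::INFO::2018-05-30 00:16:58,595::brokerlink::68::...::(notify) Success, was notification of state_transition (EngineDown-EngineStart) sent? sent
MainThread::INFO::2018-05-30 00:21:08,413::brokerlink::68::...::(notify) Success, was notification of state_transition (EngineStart-EngineStarting) sent? sent
EOF
)
# One transition per output line, e.g. "EngineDown-EngineStart".
transitions=$(printf '%s\n' "$agent" |
  sed -n 's/.*state_transition (\([A-Za-z-]*\)).*/\1/p')
printf '%s\n' "$transitions"
```

This makes the EngineDown -> EngineStart -> EngineStarting loop (with no transition ever reaching EngineUp) stand out at a glance.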
On Wed, May 30, 2018 at 12:16 AM, Jim Kusznir <jim@palousetech.com> wrote:
So, on ovirt2, I ran the hosted-engine --vm-status and it took over 2 minutes to get it to return, but two of the 3 machines had current (non-stale) data:
[root@ovirt2 ~]# hosted-engine --vm-status
--== Host 1 status ==--
conf_on_shared_storage             : True
Status up-to-date                  : False
Hostname                           : ovirt1.nwfiber.com
Host ID                            : 1
Engine status                      : unknown stale-data
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : f562054b
local_conf_timestamp               : 7368
Host timestamp                     : 7367
Extra metadata (valid at timestamp):
        metadata_parse_version=1
        metadata_feature_version=1
        timestamp=7367 (Wed May 30 00:05:47 2018)
        host-id=1
        score=3400
        vm_conf_refresh_time=7368 (Wed May 30 00:05:48 2018)
        conf_on_shared_storage=True
        maintenance=False
        state=EngineStart
        stopped=False
--== Host 2 status ==--
conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : ovirt2.nwfiber.com
Host ID                            : 2
Engine status                      : {"reason": "bad vm status", "health": "bad", "vm": "down_unexpected", "detail": "Down"}
Score                              : 0
stopped                            : False
Local maintenance                  : False
crc32                              : 6822f86a
local_conf_timestamp               : 3610
Host timestamp                     : 3610
Extra metadata (valid at timestamp):
        metadata_parse_version=1
        metadata_feature_version=1
        timestamp=3610 (Wed May 30 00:04:28 2018)
        host-id=2
        score=0
        vm_conf_refresh_time=3610 (Wed May 30 00:04:28 2018)
        conf_on_shared_storage=True
        maintenance=False
        state=EngineUnexpectedlyDown
        stopped=False
        timeout=Wed Dec 31 17:09:30 1969
--== Host 3 status ==--
conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : ovirt3.nwfiber.com
Host ID                            : 3
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : 006512a3
local_conf_timestamp               : 99294
Host timestamp                     : 99294
Extra metadata (valid at timestamp):
        metadata_parse_version=1
        metadata_feature_version=1
        timestamp=99294 (Wed May 30 00:04:36 2018)
        host-id=3
        score=3400
        vm_conf_refresh_time=99294 (Wed May 30 00:04:36 2018)
        conf_on_shared_storage=True
        maintenance=False
        state=EngineStarting
        stopped=False
---------------
Since the HA score was 0 on ovirt2 (where I had been focusing my efforts) and ovirt3 had a good score, I tried to launch the engine there:
[root@ovirt3 ff3d0185-d120-40ee-9792-ebbd5caaca8b]# hosted-engine --vm-start
Command VM.getStats with args {'vmID': '50392390-4f89-435c-bf3b-8254b58a4ef7'} failed: (code=1, message=Virtual machine does not exist: {'vmId': u'50392390-4f89-435c-bf3b-8254b58a4ef7'})
ovirt1 was returning the "ha-agent or storage not online" message, but now it's hanging (probably for 2 minutes), so it may actually return status this time. I figure if it can't return status, there's no point in trying to start the VM there.
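As an aside, the three status blocks above condense to one line per host with a little awk, which makes the score-0 host and the stale host obvious. This is only a sketch over a trimmed sample of the output; on a live host you would pipe `hosted-engine --vm-status` in instead of the here-document:

```shell
# Trimmed sample of `hosted-engine --vm-status` output from above;
# only the three fields the summary needs are kept.
status=$(cat <<'EOF'
--== Host 1 status ==--
Status up-to-date : False
Hostname          : ovirt1.nwfiber.com
Score             : 3400
--== Host 2 status ==--
Status up-to-date : True
Hostname          : ovirt2.nwfiber.com
Score             : 0
--== Host 3 status ==--
Status up-to-date : True
Hostname          : ovirt3.nwfiber.com
Score             : 3400
EOF
)
# Split on "colon + spaces"; emit one "host score=N stale=yes/no" line
# per Score line, using the most recently seen Hostname and up-to-date flag.
summary=$(printf '%s\n' "$status" | awk -F': *' '
  /^Status up-to-date/ { fresh = $2 }
  /^Hostname/          { host = $2 }
  /^Score/             { printf "%s score=%s stale=%s\n", host, $2, (fresh == "True" ? "no" : "yes") }')
printf '%s\n' "$summary"
```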
On Wed, May 30, 2018 at 12:08 AM, Jim Kusznir <jim@palousetech.com> wrote:
The agent.log was sent minutes ago in another message. Here's the gluster client log for the engine volume:
[2018-05-30 06:31:12.343709] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata selfheal on 3a430e50-48cb-463c-8657-aab5458ac155
[2018-05-30 06:31:12.348210] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-engine-replicate-0: expunging file 117c5234-e6d6-4f03-809b-a4ef6a6fac4d/hosted-engine.metadata (38a277cd-113d-4ca8-8f48-fadc944d3272) on engine-client-1
[2018-05-30 06:31:12.360602] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed metadata selfheal on 3a430e50-48cb-463c-8657-aab5458ac155. sources=[0] 2 sinks=1
[2018-05-30 06:31:12.379169] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed entry selfheal on 117c5234-e6d6-4f03-809b-a4ef6a6fac4d. sources=[0] 2 sinks=1
[2018-05-30 06:32:54.756729] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f
[2018-05-30 06:32:54.763175] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed metadata selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 06:37:06.581538] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 06c557a9-b550-4e25-b190-804e50cea65b. sources=[0] 2 sinks=1
[2018-05-30 06:39:55.739302] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 06:44:04.911825] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 06:58:05.123854] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata selfheal on 03ca7a59-e8ee-4a70-bc68-47e972aeb801
[2018-05-30 06:58:05.131929] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed metadata selfheal on 03ca7a59-e8ee-4a70-bc68-47e972aeb801. sources=[0] 2 sinks=1
[2018-05-30 07:01:42.583991] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 07:05:48.777982] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
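Worth noting: the same gfid (2ecf0f33-...) completes a data selfheal over and over in the log above, which suggests a file being re-dirtied faster than heal converges rather than a one-off repair. A quick way to spot repeat offenders is to count completions per gfid; this sketch runs over an abbreviated embedded sample, but the same pipeline works against the real client or glustershd log:

```shell
# Abbreviated sample of the "Completed ... selfheal on <gfid>" lines above.
log=$(cat <<'EOF'
[2018-05-30 06:32:54.763175] I [MSGID: 108026] 0-engine-replicate-0: Completed metadata selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 06:39:55.739302] I [MSGID: 108026] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 06:44:04.911825] I [MSGID: 108026] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 06:37:06.581538] I [MSGID: 108026] 0-engine-replicate-0: Completed data selfheal on 06c557a9-b550-4e25-b190-804e50cea65b. sources=[0] 2 sinks=1
EOF
)
# Pull out the gfid from each completion line, then count occurrences,
# most frequent first.
counts=$(printf '%s\n' "$log" |
  sed -n 's/.*Completed [a-z]* selfheal on \([0-9a-f-]*\)\..*/\1/p' |
  sort | uniq -c | sort -rn)
printf '%s\n' "$counts"
```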
-------- broker log: --------
Thread-1::INFO::2018-05-30 00:07:13,241::ping::60::ping.Ping::(action) Successfully pinged 192.168.8.1
Listener::INFO::2018-05-30 00:07:14,344::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:19,356::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-5::INFO::2018-05-30 00:07:22,081::engine_health::202::engine_health.EngineHealth::(_result_from_stats) VM not running on this host, status Down
Listener::INFO::2018-05-30 00:07:24,369::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:29,382::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:34,393::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:39,406::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:44,419::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:49,431::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:54,444::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:59,452::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-2::INFO::2018-05-30 00:08:03,736::mgmt_bridge::62::mgmt_bridge.MgmtBridge::(action) Found bridge ovirtmgmt with ports
Listener::INFO::2018-05-30 00:08:04,460::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:08:09,468::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-4::INFO::2018-05-30 00:08:11,461::cpu_load_no_engine::116::cpu_load_no_engine.CpuLoadNoEngine::(calculate_load) VM stats do not contain cpu usage. VM might be down.
Thread-4::INFO::2018-05-30 00:08:11,461::cpu_load_no_engine::126::cpu_load_no_engine.CpuLoadNoEngine::(calculate_load) System load total=0.0169, engine=0.0000, non-engine=0.0169
Listener::INFO::2018-05-30 00:08:14,481::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:08:19,490::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-5::INFO::2018-05-30 00:08:22,175::engine_health::202::engine_health.EngineHealth::(_result_from_stats) VM not running on this host, status Down
Thread-1::INFO::2018-05-30 00:08:22,436::ping::60::ping.Ping::(action) Successfully pinged 192.168.8.1
Listener::INFO::2018-05-30 00:08:24,503::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
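The striking thing in the broker log is that the VDSM domain monitor stays PENDING for the entire window, which by itself would explain a slow or stale `hosted-engine --vm-status`. A quick pipeline quantifies it; this sketch runs over an abbreviated embedded sample (the "..." stands in for the full StorageBroker module path), but the same commands work against the real /var/log/ovirt-hosted-engine-ha/broker.log:

```shell
# Abbreviated sample of the Listener lines above.
broker=$(cat <<'EOF'
Listener::INFO::2018-05-30 00:07:14,344::storage_broker::304::...::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:19,356::storage_broker::304::...::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:08:24,503::storage_broker::304::...::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
EOF
)
# Count PENDING reports and grab the first/last wall-clock times
# (field 2 is "HH:MM:SS,mmm::..."; cut at the comma keeps HH:MM:SS).
pending=$(printf '%s\n' "$broker" | grep -c 'domain monitor status: PENDING')
first=$(printf '%s\n' "$broker" | grep 'PENDING' | head -n1 | awk '{print $2}' | cut -d, -f1)
last=$(printf '%s\n' "$broker" | grep 'PENDING' | tail -n1 | awk '{print $2}' | cut -d, -f1)
echo "domain monitor PENDING $pending times between $first and $last"
```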
On Wed, May 30, 2018 at 12:03 AM, Sahina Bose <sabose@redhat.com> wrote:
[Adding gluster-users back]
Nothing amiss with volume info and status. Can you check the agent.log and broker.log - will be under /var/log/ovirt-hosted-engine-ha/
Also the gluster client logs - under /var/log/glusterfs/rhev-data-center-mnt-glusterSD<volume>.log
On Wed, May 30, 2018 at 12:08 PM, Jim Kusznir <jim@palousetech.com> wrote:
> I believe the gluster data store for the engine is up and working
> correctly. The rest are not mounted, as the engine hasn't managed to start
> correctly yet. I did perform the copy-a-junk-file test: copy a file into
> the data store, check that it's there on another host, then delete it and
> see that it disappears on the first host; it passed that test. Here's the
> info and status. (I have NOT performed the steps that Krutika and
> Ravishankar suggested yet, as I don't have my data volumes working again
> yet.)
>
> [root@ovirt2 images]# gluster volume info
>
> Volume Name: data
> Type: Replicate
> Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt1.nwfiber.com:/gluster/brick2/data
> Brick2: ovirt2.nwfiber.com:/gluster/brick2/data
> Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter)
> Options Reconfigured:
> diagnostics.count-fop-hits: on
> diagnostics.latency-measurement: on
> changelog.changelog: on
> geo-replication.ignore-pid-check: on
> geo-replication.indexing: on
> server.allow-insecure: on
> performance.readdir-ahead: on
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> features.shard-block-size: 512MB
> performance.low-prio-threads: 32
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-wait-qlength: 10000
> cluster.shd-max-threads: 8
> network.ping-timeout: 30
> user.cifs: off
> nfs.disable: on
> performance.strict-o-direct: on
>
> Volume Name: data-hdd
> Type: Replicate
> Volume ID: d342a3ab-16f3-49f0-bbcf-f788be8ac5f1
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: 172.172.1.11:/gluster/brick3/data-hdd
> Brick2: 172.172.1.12:/gluster/brick3/data-hdd
> Brick3: 172.172.1.13:/gluster/brick3/data-hdd
> Options Reconfigured:
> diagnostics.count-fop-hits: on
> diagnostics.latency-measurement: on
> network.ping-timeout: 30
> server.allow-insecure: on
> storage.owner-gid: 36
> storage.owner-uid: 36
> user.cifs: off
> features.shard: on
> cluster.shd-wait-qlength: 10000
> cluster.shd-max-threads: 8
> cluster.locking-scheme: granular
> cluster.data-self-heal-algorithm: full
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> cluster.eager-lock: enable
> network.remote-dio: enable
> performance.low-prio-threads: 32
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> changelog.changelog: on
> geo-replication.ignore-pid-check: on
> geo-replication.indexing: on
> transport.address-family: inet
> performance.readdir-ahead: on
>
> Volume Name: engine
> Type: Replicate
> Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
> Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
> Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
> Options Reconfigured:
> changelog.changelog: on
> geo-replication.ignore-pid-check: on
> geo-replication.indexing: on
> performance.readdir-ahead: on
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: off
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> features.shard-block-size: 512MB
> performance.low-prio-threads: 32
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-wait-qlength: 10000
> cluster.shd-max-threads: 6
> network.ping-timeout: 30
> user.cifs: off
> nfs.disable: on
> performance.strict-o-direct: on
>
> Volume Name: iso
> Type: Replicate
> Volume ID: b1ba15f5-0f0f-4411-89d0-595179f02b92
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt1.nwfiber.com:/gluster/brick4/iso
> Brick2: ovirt2.nwfiber.com:/gluster/brick4/iso
> Brick3: ovirt3.nwfiber.com:/gluster/brick4/iso (arbiter)
> Options Reconfigured:
> performance.strict-o-direct: on
> nfs.disable: on
> user.cifs: off
> network.ping-timeout: 30
> cluster.shd-max-threads: 6
> cluster.shd-wait-qlength: 10000
> cluster.locking-scheme: granular
> cluster.data-self-heal-algorithm: full
> performance.low-prio-threads: 32
> features.shard-block-size: 512MB
> features.shard: on
> storage.owner-gid: 36
> storage.owner-uid: 36
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> network.remote-dio: off
> cluster.eager-lock: enable
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> performance.readdir-ahead: on
>
> [root@ovirt2 images]# gluster volume status
> Status of volume: data
> Gluster process                                TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick ovirt1.nwfiber.com:/gluster/brick2/data  49152     0          Y       3226
> Brick ovirt2.nwfiber.com:/gluster/brick2/data  49152     0          Y       2967
> Brick ovirt3.nwfiber.com:/gluster/brick2/data  49152     0          Y       2554
> Self-heal Daemon on localhost                  N/A       N/A        Y       4818
> Self-heal Daemon on ovirt3.nwfiber.com         N/A       N/A        Y       17853
> Self-heal Daemon on ovirt1.nwfiber.com         N/A       N/A        Y       4771
>
> Task Status of Volume data
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> Status of volume: data-hdd
> Gluster process                                TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick 172.172.1.11:/gluster/brick3/data-hdd    49153     0          Y       3232
> Brick 172.172.1.12:/gluster/brick3/data-hdd    49153     0          Y       2976
> Brick 172.172.1.13:/gluster/brick3/data-hdd    N/A       N/A        N       N/A
> NFS Server on localhost                        N/A       N/A        N       N/A
> Self-heal Daemon on localhost                  N/A       N/A        Y       4818
> NFS Server on ovirt3.nwfiber.com               N/A       N/A        N       N/A
> Self-heal Daemon on ovirt3.nwfiber.com         N/A       N/A        Y       17853
> NFS Server on ovirt1.nwfiber.com               N/A       N/A        N       N/A
> Self-heal Daemon on ovirt1.nwfiber.com         N/A       N/A        Y       4771
>
> Task Status of Volume data-hdd
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> Status of volume: engine
> Gluster process                                  TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick ovirt1.nwfiber.com:/gluster/brick1/engine  49154     0          Y       3239
> Brick ovirt2.nwfiber.com:/gluster/brick1/engine  49154     0          Y       2982
> Brick ovirt3.nwfiber.com:/gluster/brick1/engine  49154     0          Y       2578
> Self-heal Daemon on localhost                    N/A       N/A        Y       4818
> Self-heal Daemon on ovirt3.nwfiber.com           N/A       N/A        Y       17853
> Self-heal Daemon on ovirt1.nwfiber.com           N/A       N/A        Y       4771
>
> Task Status of Volume engine
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> Status of volume: iso
> Gluster process                                TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick ovirt1.nwfiber.com:/gluster/brick4/iso   49155     0          Y       3247
> Brick ovirt2.nwfiber.com:/gluster/brick4/iso   49155     0          Y       2990
> Brick ovirt3.nwfiber.com:/gluster/brick4/iso   49155     0          Y       2580
> Self-heal Daemon on localhost                  N/A       N/A        Y       4818
> Self-heal Daemon on ovirt3.nwfiber.com         N/A       N/A        Y       17853
> Self-heal Daemon on ovirt1.nwfiber.com         N/A       N/A        Y       4771
>
> Task Status of Volume iso
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
>
> On Tue, May 29, 2018 at 11:30 PM, Sahina Bose <sabose@redhat.com> wrote:
>
>> On Wed, May 30, 2018 at 10:42 AM, Jim Kusznir <jim@palousetech.com> wrote:
>>
>>> hosted-engine --deploy failed (would not come up on my existing
>>> gluster storage). However, I realized no changes were written to my
>>> existing storage. So, I went back to trying to get my old engine running.
>>>
>>> hosted-engine --vm-status is now taking a very long time (5+ minutes)
>>> to return, and it returns stale information everywhere. I thought perhaps
>>> the lockspace was corrupt, so I tried to clean that and the metadata, but
>>> both are failing (--clean-metadata has hung and I can't even ctrl-c out
>>> of it).
>>>
>>> How can I reinitialize all the lockspace/metadata safely? There is no
>>> engine or VMs running currently....
>>
>> I think the first thing to make sure is that your storage is up and
>> running. So can you mount the gluster volumes and access the contents
>> there?
>> Please provide the gluster volume info and gluster volume status of
>> the volumes that you're using.
>>
>>> --Jim
>>>
>>> On Tue, May 29, 2018 at 9:33 PM, Jim Kusznir <jim@palousetech.com> wrote:
>>>
>>>> Well, things went from bad to very, very bad....
>>>>
>>>> It appears that during one of the 2-minute lockups, the fencing
>>>> agents decided that another node in the cluster was down. As a result, 2
>>>> of the 3 nodes were simultaneously reset with a fencing-agent reboot.
>>>> After the nodes came back up, the engine would not start. All running VMs
>>>> (including VMs on the 3rd node that was not rebooted) crashed.
>>>>
>>>> I've now been working for about 3 hours trying to get the engine to
>>>> come up. I don't know why it won't start. hosted-engine --vm-start says
>>>> it's starting, but it doesn't start (virsh doesn't show any VMs running).
>>>> I'm currently running --deploy, as I had run out of options for
>>>> anything else I could come up with. I hope this will allow me to
>>>> re-import all my existing VMs and allow me to start them back up after
>>>> everything comes back up.
>>>>
>>>> I do have an unverified geo-rep backup; I don't know if it is a good
>>>> backup (there were several prior messages to this list, but I didn't get
>>>> replies to my questions; it was behaving in ways I believe to be
>>>> "strange", and the data directories are larger than their source).
>>>>
>>>> I'll see if my --deploy works, and if not, I'll be back with another
>>>> message/help request.
>>>>
>>>> When the dust settles and I'm at least minimally functional again, I
>>>> really want to understand why all these technologies designed to offer
>>>> redundancy conspired to reduce uptime and create failures where there
>>>> weren't any otherwise. I thought that with hosted engine, 3 oVirt
>>>> servers, and glusterfs with minimum replica 2+arbiter or replica 3, I
>>>> should have had strong resilience against server or disk failure, and
>>>> should have been able to prevent or recover from data corruption.
>>>> Instead, all of the above happened (once I get my cluster back up, I
>>>> still have to try to recover my webserver VM, which won't boot due to
>>>> XFS corrupt-journal issues created during the gluster crashes). I think
>>>> a lot of these issues were rooted in the upgrade from 4.1 to 4.2.
>>>>
>>>> --Jim
>>>>
>>>> On Tue, May 29, 2018 at 6:25 PM, Jim Kusznir <jim@palousetech.com> wrote:
>>>>
>>>>> I also finally found the following in my system log on one server:
>>>>>
>>>>> [10679.524491] INFO: task glusterclogro:14933 blocked for more than 120 seconds.
>>>>> [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>> [10679.527144] glusterclogro   D ffff97209832bf40     0 14933      1 0x00000080
>>>>> [10679.527150] Call Trace:
>>>>> [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70
>>>>> [10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
>>>>> [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
>>>>> [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
>>>>> [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
>>>>> [10679.527268] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
>>>>> [10679.527271] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20
>>>>> [10679.527275] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
>>>>> [10679.527279] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
>>>>> [10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds.
>>>>> [10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>> [10679.529956] glusterposixfsy D ffff972495f84f10     0 14941      1 0x00000080
>>>>> [10679.529961] Call Trace:
>>>>> [10679.529966] [<ffffffffb9913f79>] schedule+0x29/0x70
>>>>> [10679.530003] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
>>>>> [10679.530008] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
>>>>> [10679.530038] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
>>>>> [10679.530042] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
>>>>> [10679.530046] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
>>>>> [10679.530050] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20
>>>>> [10679.530054] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
>>>>> [10679.530058] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
>>>>> [10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120 seconds.
>>>>> [10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>> [10679.533732] glusteriotwr13  D ffff9720a83f0000     0 15486      1 0x00000080
>>>>> [10679.533738] Call Trace:
>>>>> [10679.533747] [<ffffffffb9913f79>] schedule+0x29/0x70
>>>>> [10679.533799] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
>>>>> [10679.533806] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
>>>>> [10679.533846] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
>>>>> [10679.533852] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
>>>>> [10679.533858] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
>>>>> [10679.533863] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20
>>>>> [10679.533868] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
>>>>> [10679.533873] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
>>>>> [10919.512757] INFO: task glusterclogro:14933 blocked for more than 120 seconds.
>>>>> [10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>> [10919.516663] glusterclogro   D ffff97209832bf40     0 14933      1 0x00000080
>>>>> [10919.516677] Call Trace:
>>>>> [10919.516690] [<ffffffffb9913f79>] schedule+0x29/0x70
>>>>> [10919.516696] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
>>>>> [10919.516703] [<ffffffffb951cc04>] ? blk_finish_plug+0x14/0x40
>>>>> [10919.516768] [<ffffffffc05e9224>] ? _xfs_buf_ioapply+0x334/0x460 [xfs]
>>>>> [10919.516774] [<ffffffffb991432d>] wait_for_completion+0xfd/0x140
>>>>> [10919.516782] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
>>>>> [10919.516821] [<ffffffffc05eb0a3>] ? _xfs_buf_read+0x23/0x40 [xfs]
>>>>> [10919.516859] [<ffffffffc05eafa9>] xfs_buf_submit_wait+0xf9/0x1d0 [xfs]
>>>>> [10919.516902] [<ffffffffc061b279>] ? xfs_trans_read_buf_map+0x199/0x400 [xfs]
>>>>> [10919.516940] [<ffffffffc05eb0a3>] _xfs_buf_read+0x23/0x40 [xfs]
>>>>> [10919.516977] [<ffffffffc05eb1b9>] xfs_buf_read_map+0xf9/0x160 [xfs]
>>>>> [10919.517022] [<ffffffffc061b279>] xfs_trans_read_buf_map+0x199/0x400 [xfs]
>>>>> [10919.517057] [<ffffffffc05c8d04>] xfs_da_read_buf+0xd4/0x100 [xfs]
>>>>> [10919.517091] [<ffffffffc05c8d53>] xfs_da3_node_read+0x23/0xd0 [xfs]
>>>>> [10919.517126] [<ffffffffc05c9fee>] xfs_da3_node_lookup_int+0x6e/0x2f0 [xfs]
>>>>> [10919.517160] [<ffffffffc05d5a1d>] xfs_dir2_node_lookup+0x4d/0x170 [xfs]
>>>>> [10919.517194] [<ffffffffc05ccf5d>] xfs_dir_lookup+0x1bd/0x1e0 [xfs]
>>>>> [10919.517233] [<ffffffffc05fd8d9>] xfs_lookup+0x69/0x140 [xfs]
>>>>> [10919.517271] [<ffffffffc05fa018>] xfs_vn_lookup+0x78/0xc0 [xfs]
>>>>> [10919.517278] [<ffffffffb9425cf3>] lookup_real+0x23/0x60
>>>>> [10919.517283] [<ffffffffb9426702>] __lookup_hash+0x42/0x60
>>>>> [10919.517288] [<ffffffffb942d519>] SYSC_renameat2+0x3a9/0x5a0
>>>>> [10919.517296] [<ffffffffb94d3753>] ? selinux_file_free_security+0x23/0x30
>>>>> [10919.517304] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
>>>>> [10919.517309] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
>>>>> [10919.517313] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
>>>>> [10919.517318] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
>>>>> [10919.517323] [<ffffffffb942e58e>] SyS_renameat2+0xe/0x10
>>>>> [10919.517328] [<ffffffffb942e5ce>] SyS_rename+0x1e/0x20
>>>>> [10919.517333] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
>>>>> [10919.517339] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
>>>>> [11159.496095] INFO: task glusteriotwr9:15482 blocked for more than 120 seconds.
>>>>> [11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>> [11159.498978] glusteriotwr9   D ffff971fa0fa1fa0     0 15482      1 0x00000080
>>>>> [11159.498984] Call Trace:
>>>>> [11159.498995] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50
>>>>> [11159.498999] [<ffffffffb9913f79>] schedule+0x29/0x70
>>>>> [11159.499003] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
>>>>> [11159.499056] [<ffffffffc05dd9b7>] ? xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs]
>>>>> [11159.499082] [<ffffffffc05dd43e>] ? xfs_iext_bno_to_irec+0x8e/0xd0 [xfs]
>>>>> [11159.499090] [<ffffffffb92f7a12>] ? ktime_get_ts64+0x52/0xf0
>>>>> [11159.499093] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50
>>>>> [11159.499097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130
>>>>> [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20
>>>>> [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50
>>>>> [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0
>>>>> [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90
>>>>> [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40
>>>>> [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70
>>>>> [11159.499125] [<ffffffffb9394e85>] grab_cache_page_write_begin+0x55/0xc0
>>>>> [11159.499130] [<ffffffffb9484b76>] iomap_write_begin+0x66/0x100
>>>>> [11159.499135] [<ffffffffb9484edf>] iomap_write_actor+0xcf/0x1d0
>>>>> [11159.499140] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80
>>>>> [11159.499144] [<ffffffffb94854e7>] iomap_apply+0xb7/0x150
>>>>> [11159.499149] [<ffffffffb9485621>] iomap_file_buffered_write+0xa1/0xe0
>>>>> [11159.499153] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80
>>>>> [11159.499182] [<ffffffffc05f025d>] xfs_file_buffered_aio_write+0x12d/0x2c0 [xfs]
>>>>> [11159.499213] [<ffffffffc05f057d>] xfs_file_aio_write+0x18d/0x1b0 [xfs]
>>>>> [11159.499217] [<ffffffffb941a533>] do_sync_write+0x93/0xe0
>>>>> [11159.499222] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0
>>>>> [11159.499225] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0
>>>>> [11159.499230] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
>>>>> [11159.499234] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
>>>>> [11159.499238] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
>>>>> [11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than 120 seconds.
>>>>> [11279.490197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>> [11279.491665] xfsaild/dm-10   D ffff9720a8660fd0     0  1134      2 0x00000000
>>>>> [11279.491671] Call Trace:
>>>>> [11279.491682] [<ffffffffb92a3a2e>] ? try_to_del_timer_sync+0x5e/0x90
>>>>> [11279.491688] [<ffffffffb9913f79>] schedule+0x29/0x70
>>>>> [11279.491744] [<ffffffffc060de36>] _xfs_log_force+0x1c6/0x2c0 [xfs]
>>>>> [11279.491750] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
>>>>> [11279.491783] [<ffffffffc0619fec>] ? xfsaild+0x16c/0x6f0 [xfs]
>>>>> [11279.491817] [<ffffffffc060df5c>] xfs_log_force+0x2c/0x70 [xfs]
>>>>> [11279.491849] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
>>>>> [11279.491880] [<ffffffffc0619fec>] xfsaild+0x16c/0x6f0 [xfs]
>>>>> [11279.491913] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
>>>>> [11279.491919] [<ffffffffb92bb161>] kthread+0xd1/0xe0
>>>>> [11279.491926] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40
>>>>> [11279.491932] [<ffffffffb9920677>] ret_from_fork_nospec_begin+0x21/0x21
>>>>> [11279.491936] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40
>>>>> [11279.491976] INFO: task glusterclogfsyn:14934 blocked for more than 120 seconds.
>>>>> [11279.493466] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>> [11279.494952] glusterclogfsyn D ffff97209832af70     0 14934      1 0x00000080
>>>>> [11279.494957] Call Trace:
>>>>> [11279.494979] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod]
>>>>> [11279.494983] [<ffffffffb9913f79>] schedule+0x29/0x70
>>>>> [11279.494987] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
>>>>> [11279.494997] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod]
>>>>> [11279.495001] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130
>>>>> [11279.495005] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140
>>>>> [11279.495010] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
>>>>> [11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110
>>>>> [11279.495049] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs]
>>>>> [11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs]
>>>>> [11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
>>>>> [11279.495090] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
>>>>> [11279.495094] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20
>>>>> [11279.495098] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
>>>>> [11279.495102] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
>>>>> [11279.495105] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds.
>>>>> [11279.496606] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>> [11279.498114] glusterposixfsy D ffff972495f84f10     0 14941      1 0x00000080
>>>>> [11279.498118] Call Trace:
>>>>> [11279.498134] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod]
>>>>> [11279.498138] [<ffffffffb9913f79>] schedule+0x29/0x70
>>>>> [11279.498142] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0
>>>>> [11279.498152] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod]
>>>>> [11279.498156] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130
>>>>> [11279.498160] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140
>>>>> [11279.498165] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
>>>>> [11279.498169] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110
>>>>> [11279.498202] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs]
>>>>> [11279.498231] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs]
>>>>> [11279.498238] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
>>>>> [11279.498242] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
>>>>> [11279.498246] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20
>>>>> [11279.498250] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
>>>>> [11279.498254] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
>>>>> [11279.498257] INFO: task glusteriotwr1:14950 blocked for more than 120 seconds.
>>>>> [11279.499789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>> [11279.501343] glusteriotwr1   D ffff97208b6daf70     0 14950      1 0x00000080
>>>>> [11279.501348] Call Trace:
>>>>> [11279.501353] [<ffffffffb9913f79>] schedule+0x29/0x70
>>>>> [11279.501390] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
>>>>> [11279.501396] [<ffffffffb92cf1b0>] ?
wake_up_state+0x20/0x20 >>>>> [11279.501428] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 >>>>> [xfs] >>>>> [11279.501432] [<ffffffffb944ef3f>] generic_write_sync+0x4f/0x70 >>>>> [11279.501461] [<ffffffffc05f0545>] >>>>> xfs_file_aio_write+0x155/0x1b0 [xfs] >>>>> [11279.501466] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 >>>>> [11279.501471] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 >>>>> [11279.501475] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 >>>>> [11279.501479] [<ffffffffb992076f>] ? >>>>> system_call_after_swapgs+0xbc/0x160 >>>>> [11279.501483] [<ffffffffb992082f>] >>>>> system_call_fastpath+0x1c/0x21 >>>>> [11279.501489] [<ffffffffb992077b>] ? >>>>> system_call_after_swapgs+0xc8/0x160 >>>>> [11279.501493] INFO: task glusteriotwr4:14953 blocked for more >>>>> than 120 seconds. >>>>> [11279.503047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >>>>> disables this message. >>>>> [11279.504630] glusteriotwr4 D ffff972499f2bf40 0 14953 >>>>> 1 0x00000080 >>>>> [11279.504635] Call Trace: >>>>> [11279.504640] [<ffffffffb9913f79>] schedule+0x29/0x70 >>>>> [11279.504676] [<ffffffffc060e388>] >>>>> _xfs_log_force_lsn+0x2e8/0x340 [xfs] >>>>> [11279.504681] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >>>>> [11279.504710] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 >>>>> [xfs] >>>>> [11279.504714] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >>>>> [11279.504718] [<ffffffffb992076f>] ? >>>>> system_call_after_swapgs+0xbc/0x160 >>>>> [11279.504722] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 >>>>> [11279.504725] [<ffffffffb992082f>] >>>>> system_call_fastpath+0x1c/0x21 >>>>> [11279.504730] [<ffffffffb992077b>] ? >>>>> system_call_after_swapgs+0xc8/0x160 >>>>> [12127.466494] perf: interrupt took too long (8263 > 8150), >>>>> lowering kernel.perf_event_max_sample_rate to 24000 >>>>> >>>>> -------------------- >>>>> I think this is the cause of the massive ovirt performance >>>>> issues irrespective of gluster volume. 
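The hung-task spam above is easier to read once it is reduced to which tasks the kernel flagged. A small sketch that tallies the "INFO: task ..." lines with plain sed/awk; the embedded sample is lifted from the trace above, and the quote-marker stripping is only there because of how this thread is line-wrapped:

```shell
# Tally "INFO: task NAME:PID blocked" reports from a dmesg capture.
summarize_hung_tasks() {
  # Drop mail quote markers, keep "INFO: task name:pid", strip the PID,
  # then count reports per task name.
  sed 's/>//g' | grep -o 'INFO: task [^ ]*' |
    awk '{ sub(/:[0-9]+$/, "", $3); count[$3]++ }
         END { for (t in count) print count[t], t }' | sort -rn
}

# Sample lines taken from the trace quoted in this thread.
sample='[11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than 120 seconds.
[11279.491976] INFO: task glusterclogfsyn:14934 blocked for more than 120 seconds.
[11279.495105] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds.
[11279.498257] INFO: task glusteriotwr1:14950 blocked for more than 120 seconds.
[11279.501493] INFO: task glusteriotwr4:14953 blocked for more than 120 seconds.'

printf '%s\n' "$sample" | summarize_hung_tasks
```

Here every task appears once; on a box that has been wedged for hours, the counts make it obvious whether it is one writer or (as above) the whole fsync path that is stuck.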
At the time this happened, I was >>>>> also ssh'ed into the host, and was doing some rpm query commands. I had >>>>> just run rpm -qa |grep glusterfs (to verify what version was actually >>>>> installed), and that command took almost 2 minutes to return! Normally it >>>>> takes less than 2 seconds. That is all pure local SSD IO, too.... >>>>> >>>>> I'm no expert, but it's my understanding that any time software >>>>> causes these kinds of issues, it's a serious bug in the software, even if >>>>> it's mishandled exceptions. Is this correct? >>>>> >>>>> --Jim >>>>> >>>>> On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir < >>>>> jim@palousetech.com> wrote: >>>>> >>>>>> I think this is the profile information for one of the volumes >>>>>> that lives on the SSDs and is fully operational with no down/problem disks: >>>>>> >>>>>> [root@ovirt2 yum.repos.d]# gluster volume profile data info >>>>>> Brick: ovirt2.nwfiber.com:/gluster/brick2/data >>>>>> ---------------------------------------------- >>>>>> Cumulative Stats: >>>>>> Block Size: 256b+ 512b+ >>>>>> 1024b+ >>>>>> No. of Reads: 983 2696 >>>>>> 1059 >>>>>> No. of Writes: 0 1113 >>>>>> 302 >>>>>> >>>>>> Block Size: 2048b+ 4096b+ >>>>>> 8192b+ >>>>>> No. of Reads: 852 88608 >>>>>> 53526 >>>>>> No. of Writes: 522 812340 >>>>>> 76257 >>>>>> >>>>>> Block Size: 16384b+ 32768b+ >>>>>> 65536b+ >>>>>> No. of Reads: 54351 241901 >>>>>> 15024 >>>>>> No. of Writes: 21636 8656 >>>>>> 8976 >>>>>> >>>>>> Block Size: 131072b+ >>>>>> No. of Reads: 524156 >>>>>> No. of Writes: 296071 >>>>>> %-latency Avg-latency Min-Latency Max-Latency No.
of >>>>>> calls Fop >>>>>> --------- ----------- ----------- ----------- >>>>>> ------------ ---- >>>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>>> 4189 RELEASE >>>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>>> 1257 RELEASEDIR >>>>>> 0.00 46.19 us 12.00 us 187.00 us >>>>>> 69 FLUSH >>>>>> 0.00 147.00 us 78.00 us 367.00 us >>>>>> 86 REMOVEXATTR >>>>>> 0.00 223.46 us 24.00 us 1166.00 us >>>>>> 149 READDIR >>>>>> 0.00 565.34 us 76.00 us 3639.00 us >>>>>> 88 FTRUNCATE >>>>>> 0.00 263.28 us 20.00 us 28385.00 us >>>>>> 228 LK >>>>>> 0.00 98.84 us 2.00 us 880.00 us >>>>>> 1198 OPENDIR >>>>>> 0.00 91.59 us 26.00 us 10371.00 us >>>>>> 3853 STATFS >>>>>> 0.00 494.14 us 17.00 us 193439.00 us >>>>>> 1171 GETXATTR >>>>>> 0.00 299.42 us 35.00 us 9799.00 us >>>>>> 2044 READDIRP >>>>>> 0.00 1965.31 us 110.00 us 382258.00 us >>>>>> 321 XATTROP >>>>>> 0.01 113.40 us 24.00 us 61061.00 us >>>>>> 8134 STAT >>>>>> 0.01 755.38 us 57.00 us 607603.00 us >>>>>> 3196 DISCARD >>>>>> 0.05 2690.09 us 58.00 us 2704761.00 us >>>>>> 3206 OPEN >>>>>> 0.10 119978.25 us 97.00 us 9406684.00 us >>>>>> 154 SETATTR >>>>>> 0.18 101.73 us 28.00 us 700477.00 us >>>>>> 313379 FSTAT >>>>>> 0.23 1059.84 us 25.00 us 2716124.00 us >>>>>> 38255 LOOKUP >>>>>> 0.47 1024.11 us 54.00 us 6197164.00 us >>>>>> 81455 FXATTROP >>>>>> 1.72 2984.00 us 15.00 us 37098954.00 us >>>>>> 103020 FINODELK >>>>>> 5.92 44315.32 us 51.00 us 24731536.00 us >>>>>> 23957 FSYNC >>>>>> 13.27 2399.78 us 25.00 us 22089540.00 us >>>>>> 991005 READ >>>>>> 37.00 5980.43 us 52.00 us 22099889.00 us >>>>>> 1108976 WRITE >>>>>> 41.04 5452.75 us 13.00 us 22102452.00 us >>>>>> 1349053 INODELK >>>>>> >>>>>> Duration: 10026 seconds >>>>>> Data Read: 80046027759 bytes >>>>>> Data Written: 44496632320 bytes >>>>>> >>>>>> Interval 1 Stats: >>>>>> Block Size: 256b+ 512b+ >>>>>> 1024b+ >>>>>> No. of Reads: 983 2696 >>>>>> 1059 >>>>>> No. of Writes: 0 838 >>>>>> 185 >>>>>> >>>>>> Block Size: 2048b+ 4096b+ >>>>>> 8192b+ >>>>>> No. 
of Reads: 852 85856 >>>>>> 51575 >>>>>> No. of Writes: 382 705802 >>>>>> 57812 >>>>>> >>>>>> Block Size: 16384b+ 32768b+ >>>>>> 65536b+ >>>>>> No. of Reads: 52673 232093 >>>>>> 14984 >>>>>> No. of Writes: 13499 4908 >>>>>> 4242 >>>>>> >>>>>> Block Size: 131072b+ >>>>>> No. of Reads: 460040 >>>>>> No. of Writes: 6411 >>>>>> %-latency Avg-latency Min-Latency Max-Latency No. of >>>>>> calls Fop >>>>>> --------- ----------- ----------- ----------- >>>>>> ------------ ---- >>>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>>> 2093 RELEASE >>>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>>> 1093 RELEASEDIR >>>>>> 0.00 53.38 us 26.00 us 111.00 us >>>>>> 16 FLUSH >>>>>> 0.00 145.14 us 78.00 us 367.00 us >>>>>> 71 REMOVEXATTR >>>>>> 0.00 190.96 us 114.00 us 298.00 us >>>>>> 71 SETATTR >>>>>> 0.00 213.38 us 24.00 us 1145.00 us >>>>>> 90 READDIR >>>>>> 0.00 263.28 us 20.00 us 28385.00 us >>>>>> 228 LK >>>>>> 0.00 101.76 us 2.00 us 880.00 us >>>>>> 1093 OPENDIR >>>>>> 0.01 93.60 us 27.00 us 10371.00 us >>>>>> 3090 STATFS >>>>>> 0.02 537.47 us 17.00 us 193439.00 us >>>>>> 1038 GETXATTR >>>>>> 0.03 297.44 us 35.00 us 9799.00 us >>>>>> 1990 READDIRP >>>>>> 0.03 2357.28 us 110.00 us 382258.00 us >>>>>> 253 XATTROP >>>>>> 0.04 385.93 us 58.00 us 47593.00 us >>>>>> 2091 OPEN >>>>>> 0.04 114.86 us 24.00 us 61061.00 us >>>>>> 7715 STAT >>>>>> 0.06 444.59 us 57.00 us 333240.00 us >>>>>> 3053 DISCARD >>>>>> 0.42 316.24 us 25.00 us 290728.00 us >>>>>> 29823 LOOKUP >>>>>> 0.73 257.92 us 54.00 us 344812.00 us >>>>>> 63296 FXATTROP >>>>>> 1.37 98.30 us 28.00 us 67621.00 us >>>>>> 313172 FSTAT >>>>>> 1.58 2124.69 us 51.00 us 849200.00 us >>>>>> 16717 FSYNC >>>>>> 5.73 162.46 us 52.00 us 748492.00 us >>>>>> 794079 WRITE >>>>>> 7.19 2065.17 us 16.00 us 37098954.00 us >>>>>> 78381 FINODELK >>>>>> 36.44 886.32 us 25.00 us 2216436.00 us >>>>>> 925421 READ >>>>>> 46.30 1178.04 us 13.00 us 1700704.00 us >>>>>> 884635 INODELK >>>>>> >>>>>> Duration: 7485 seconds >>>>>> Data Read: 71250527215 bytes >>>>>> Data Written: 
5119903744 bytes >>>>>> >>>>>> Brick: ovirt3.nwfiber.com:/gluster/brick2/data >>>>>> ---------------------------------------------- >>>>>> Cumulative Stats: >>>>>> Block Size: 1b+ >>>>>> No. of Reads: 0 >>>>>> No. of Writes: 3264419 >>>>>> %-latency Avg-latency Min-Latency Max-Latency No. of >>>>>> calls Fop >>>>>> --------- ----------- ----------- ----------- >>>>>> ------------ ---- >>>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>>> 90 FORGET >>>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>>> 9462 RELEASE >>>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>>> 4254 RELEASEDIR >>>>>> 0.00 50.52 us 13.00 us 190.00 us >>>>>> 71 FLUSH >>>>>> 0.00 186.97 us 87.00 us 713.00 us >>>>>> 86 REMOVEXATTR >>>>>> 0.00 79.32 us 33.00 us 189.00 us >>>>>> 228 LK >>>>>> 0.00 220.98 us 129.00 us 513.00 us >>>>>> 86 SETATTR >>>>>> 0.01 259.30 us 26.00 us 2632.00 us >>>>>> 137 READDIR >>>>>> 0.02 322.76 us 145.00 us 2125.00 us >>>>>> 321 XATTROP >>>>>> 0.03 109.55 us 2.00 us 1258.00 us >>>>>> 1193 OPENDIR >>>>>> 0.05 70.21 us 21.00 us 431.00 us >>>>>> 3196 DISCARD >>>>>> 0.05 169.26 us 21.00 us 2315.00 us >>>>>> 1545 GETXATTR >>>>>> 0.12 176.85 us 63.00 us 2844.00 us >>>>>> 3206 OPEN >>>>>> 0.61 303.49 us 90.00 us 3085.00 us >>>>>> 9633 FSTAT >>>>>> 2.44 305.66 us 28.00 us 3716.00 us >>>>>> 38230 LOOKUP >>>>>> 4.52 266.22 us 55.00 us 53424.00 us >>>>>> 81455 FXATTROP >>>>>> 6.96 1397.99 us 51.00 us 64822.00 us >>>>>> 23889 FSYNC >>>>>> 16.48 84.74 us 25.00 us 6917.00 us >>>>>> 932592 WRITE >>>>>> 30.16 106.90 us 13.00 us 3920189.00 us >>>>>> 1353046 INODELK >>>>>> 38.55 1794.52 us 14.00 us 16210553.00 us >>>>>> 103039 FINODELK >>>>>> >>>>>> Duration: 66562 seconds >>>>>> Data Read: 0 bytes >>>>>> Data Written: 3264419 bytes >>>>>> >>>>>> Interval 1 Stats: >>>>>> Block Size: 1b+ >>>>>> No. of Reads: 0 >>>>>> No. of Writes: 794080 >>>>>> %-latency Avg-latency Min-Latency Max-Latency No. 
of >>>>>> calls Fop >>>>>> --------- ----------- ----------- ----------- >>>>>> ------------ ---- >>>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>>> 2093 RELEASE >>>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>>> 1093 RELEASEDIR >>>>>> 0.00 70.31 us 26.00 us 125.00 us >>>>>> 16 FLUSH >>>>>> 0.00 193.10 us 103.00 us 713.00 us >>>>>> 71 REMOVEXATTR >>>>>> 0.01 227.32 us 133.00 us 513.00 us >>>>>> 71 SETATTR >>>>>> 0.01 79.32 us 33.00 us 189.00 us >>>>>> 228 LK >>>>>> 0.01 259.83 us 35.00 us 1138.00 us >>>>>> 89 READDIR >>>>>> 0.03 318.26 us 145.00 us 2047.00 us >>>>>> 253 XATTROP >>>>>> 0.04 112.67 us 3.00 us 1258.00 us >>>>>> 1093 OPENDIR >>>>>> 0.06 167.98 us 23.00 us 1951.00 us >>>>>> 1014 GETXATTR >>>>>> 0.08 70.97 us 22.00 us 431.00 us >>>>>> 3053 DISCARD >>>>>> 0.13 183.78 us 66.00 us 2844.00 us >>>>>> 2091 OPEN >>>>>> 1.01 303.82 us 90.00 us 3085.00 us >>>>>> 9610 FSTAT >>>>>> 3.27 316.59 us 30.00 us 3716.00 us >>>>>> 29820 LOOKUP >>>>>> 5.83 265.79 us 59.00 us 53424.00 us >>>>>> 63296 FXATTROP >>>>>> 7.95 1373.89 us 51.00 us 64822.00 us >>>>>> 16717 FSYNC >>>>>> 23.17 851.99 us 14.00 us 16210553.00 us >>>>>> 78555 FINODELK >>>>>> 24.04 87.44 us 27.00 us 6917.00 us >>>>>> 794081 WRITE >>>>>> 34.36 111.91 us 14.00 us 984871.00 us >>>>>> 886790 INODELK >>>>>> >>>>>> Duration: 7485 seconds >>>>>> Data Read: 0 bytes >>>>>> Data Written: 794080 bytes >>>>>> >>>>>> >>>>>> ----------------------- >>>>>> Here is the data from the volume that is backed by the SHDDs >>>>>> and has one failed disk: >>>>>> [root@ovirt2 yum.repos.d]# gluster volume profile data-hdd info >>>>>> Brick: 172.172.1.12:/gluster/brick3/data-hdd >>>>>> -------------------------------------------- >>>>>> Cumulative Stats: >>>>>> Block Size: 256b+ 512b+ >>>>>> 1024b+ >>>>>> No. of Reads: 1702 86 >>>>>> 16 >>>>>> No. of Writes: 0 767 >>>>>> 71 >>>>>> >>>>>> Block Size: 2048b+ 4096b+ >>>>>> 8192b+ >>>>>> No. of Reads: 19 51841 >>>>>> 2049 >>>>>> No. 
of Writes: 76 60668 >>>>>> 35727 >>>>>> >>>>>> Block Size: 16384b+ 32768b+ >>>>>> 65536b+ >>>>>> No. of Reads: 1744 639 >>>>>> 1088 >>>>>> No. of Writes: 8524 2410 >>>>>> 1285 >>>>>> >>>>>> Block Size: 131072b+ >>>>>> No. of Reads: 771999 >>>>>> No. of Writes: 29584 >>>>>> %-latency Avg-latency Min-Latency Max-Latency No. of >>>>>> calls Fop >>>>>> --------- ----------- ----------- ----------- >>>>>> ------------ ---- >>>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>>> 2902 RELEASE >>>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>>> 1517 RELEASEDIR >>>>>> 0.00 197.00 us 197.00 us 197.00 us >>>>>> 1 FTRUNCATE >>>>>> 0.00 70.24 us 16.00 us 758.00 us >>>>>> 51 FLUSH >>>>>> 0.00 143.93 us 82.00 us 305.00 us >>>>>> 57 REMOVEXATTR >>>>>> 0.00 178.63 us 105.00 us 712.00 us >>>>>> 60 SETATTR >>>>>> 0.00 67.30 us 19.00 us 572.00 us >>>>>> 555 LK >>>>>> 0.00 322.80 us 23.00 us 4673.00 us >>>>>> 138 READDIR >>>>>> 0.00 336.56 us 106.00 us 11994.00 us >>>>>> 237 XATTROP >>>>>> 0.00 84.70 us 28.00 us 1071.00 us >>>>>> 3469 STATFS >>>>>> 0.01 387.75 us 2.00 us 146017.00 us >>>>>> 1467 OPENDIR >>>>>> 0.01 148.59 us 21.00 us 64374.00 us >>>>>> 4454 STAT >>>>>> 0.02 783.02 us 16.00 us 93502.00 us >>>>>> 1902 GETXATTR >>>>>> 0.03 1516.10 us 17.00 us 210690.00 us >>>>>> 1364 ENTRYLK >>>>>> 0.03 2555.47 us 300.00 us 674454.00 us >>>>>> 1064 READDIRP >>>>>> 0.07 85.74 us 19.00 us 68340.00 us >>>>>> 62849 FSTAT >>>>>> 0.07 1978.12 us 59.00 us 202596.00 us >>>>>> 2729 OPEN >>>>>> 0.22 708.57 us 15.00 us 394799.00 us >>>>>> 25447 LOOKUP >>>>>> 5.94 2331.74 us 15.00 us 1099530.00 us >>>>>> 207534 FINODELK >>>>>> 7.31 8311.75 us 58.00 us 1800216.00 us >>>>>> 71668 FXATTROP >>>>>> 12.49 7735.19 us 51.00 us 3595513.00 us >>>>>> 131642 WRITE >>>>>> 17.70 957.08 us 16.00 us 13700466.00 us >>>>>> 1508160 INODELK >>>>>> 24.55 2546.43 us 26.00 us 5077347.00 us >>>>>> 786060 READ >>>>>> 31.56 49699.15 us 47.00 us 3746331.00 us >>>>>> 51777 FSYNC >>>>>> >>>>>> Duration: 10101 seconds >>>>>> Data Read: 101562897361 
bytes >>>>>> Data Written: 4834450432 bytes >>>>>> >>>>>> Interval 0 Stats: >>>>>> Block Size: 256b+ 512b+ >>>>>> 1024b+ >>>>>> No. of Reads: 1702 86 >>>>>> 16 >>>>>> No. of Writes: 0 767 >>>>>> 71 >>>>>> >>>>>> Block Size: 2048b+ 4096b+ >>>>>> 8192b+ >>>>>> No. of Reads: 19 51841 >>>>>> 2049 >>>>>> No. of Writes: 76 60668 >>>>>> 35727 >>>>>> >>>>>> Block Size: 16384b+ 32768b+ >>>>>> 65536b+ >>>>>> No. of Reads: 1744 639 >>>>>> 1088 >>>>>> No. of Writes: 8524 2410 >>>>>> 1285 >>>>>> >>>>>> Block Size: 131072b+ >>>>>> No. of Reads: 771999 >>>>>> No. of Writes: 29584 >>>>>> %-latency Avg-latency Min-Latency Max-Latency No. of >>>>>> calls Fop >>>>>> --------- ----------- ----------- ----------- >>>>>> ------------ ---- >>>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>>> 2902 RELEASE >>>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>>> 1517 RELEASEDIR >>>>>> 0.00 197.00 us 197.00 us 197.00 us >>>>>> 1 FTRUNCATE >>>>>> 0.00 70.24 us 16.00 us 758.00 us >>>>>> 51 FLUSH >>>>>> 0.00 143.93 us 82.00 us 305.00 us >>>>>> 57 REMOVEXATTR >>>>>> 0.00 178.63 us 105.00 us 712.00 us >>>>>> 60 SETATTR >>>>>> 0.00 67.30 us 19.00 us 572.00 us >>>>>> 555 LK >>>>>> 0.00 322.80 us 23.00 us 4673.00 us >>>>>> 138 READDIR >>>>>> 0.00 336.56 us 106.00 us 11994.00 us >>>>>> 237 XATTROP >>>>>> 0.00 84.70 us 28.00 us 1071.00 us >>>>>> 3469 STATFS >>>>>> 0.01 387.75 us 2.00 us 146017.00 us >>>>>> 1467 OPENDIR >>>>>> 0.01 148.59 us 21.00 us 64374.00 us >>>>>> 4454 STAT >>>>>> 0.02 783.02 us 16.00 us 93502.00 us >>>>>> 1902 GETXATTR >>>>>> 0.03 1516.10 us 17.00 us 210690.00 us >>>>>> 1364 ENTRYLK >>>>>> 0.03 2555.47 us 300.00 us 674454.00 us >>>>>> 1064 READDIRP >>>>>> 0.07 85.73 us 19.00 us 68340.00 us >>>>>> 62849 FSTAT >>>>>> 0.07 1978.12 us 59.00 us 202596.00 us >>>>>> 2729 OPEN >>>>>> 0.22 708.57 us 15.00 us 394799.00 us >>>>>> 25447 LOOKUP >>>>>> 5.94 2334.57 us 15.00 us 1099530.00 us >>>>>> 207534 FINODELK >>>>>> 7.31 8311.49 us 58.00 us 1800216.00 us >>>>>> 71668 FXATTROP >>>>>> 12.49 7735.32 us 51.00 us 
3595513.00 us >>>>>> 131642 WRITE >>>>>> 17.71 957.08 us 16.00 us 13700466.00 us >>>>>> 1508160 INODELK >>>>>> 24.56 2546.42 us 26.00 us 5077347.00 us >>>>>> 786060 READ >>>>>> 31.54 49651.63 us 47.00 us 3746331.00 us >>>>>> 51777 FSYNC >>>>>> >>>>>> Duration: 10101 seconds >>>>>> Data Read: 101562897361 bytes >>>>>> Data Written: 4834450432 bytes >>>>>> >>>>>> >>>>>> On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir < >>>>>> jim@palousetech.com> wrote: >>>>>> >>>>>>> Thank you for your response. >>>>>>> >>>>>>> I have 4 gluster volumes. 3 are replica 2 + arbitrator. >>>>>>> replica bricks are on ovirt1 and ovirt2, arbitrator on ovirt3. The 4th >>>>>>> volume is replica 3, with a brick on all three ovirt machines. >>>>>>> >>>>>>> The first 3 volumes are on an SSD disk; the 4th is on a >>>>>>> Seagate SSHD (same in all three machines). On ovirt3, the SSHD has >>>>>>> reported hard IO failures, and that brick is offline. However, the other >>>>>>> two replicas are fully operational (although they still show contents in >>>>>>> the heal info command that won't go away, but that may be the case until I >>>>>>> replace the failed disk). >>>>>>> >>>>>>> What is bothering me is that ALL 4 gluster volumes are showing >>>>>>> horrible performance issues. At this point, as the bad disk has been >>>>>>> completely offlined, I would expect gluster to perform at normal speed, but >>>>>>> that is definitely not the case. >>>>>>> >>>>>>> I've also noticed that the performance hits seem to come in >>>>>>> waves: things seem to work acceptably (but slow) for a while, then >>>>>>> suddenly, its as if all disk IO on all volumes (including non-gluster local >>>>>>> OS disk volumes for the hosts) pause for about 30 seconds, then IO resumes >>>>>>> again. During those times, I start getting VM not responding and host not >>>>>>> responding notices as well as the applications having major issues. 
>>>>>>> >>>>>>> I've shut down most of my VMs and am down to just my essential >>>>>>> core VMs (shed about 75% of my VMs). I am still experiencing the same >>>>>>> issues. >>>>>>> >>>>>>> Am I correct in believing that once the failed disk was >>>>>>> brought offline, performance should return to normal? >>>>>>> >>>>>>> On Tue, May 29, 2018 at 1:27 PM, Alex K < >>>>>>> rightkicktech@gmail.com> wrote: >>>>>>> >>>>>>>> I would check disk status and accessibility of mount points >>>>>>>> where your gluster volumes reside. >>>>>>>> >>>>>>>> On Tue, May 29, 2018, 22:28 Jim Kusznir <jim@palousetech.com> >>>>>>>> wrote: >>>>>>>> >>>>>>>>> On one ovirt server, I'm now seeing these messages: >>>>>>>>> [56474.239725] blk_update_request: 63 callbacks suppressed >>>>>>>>> [56474.239732] blk_update_request: I/O error, dev dm-2, >>>>>>>>> sector 0 >>>>>>>>> [56474.240602] blk_update_request: I/O error, dev dm-2, >>>>>>>>> sector 3905945472 >>>>>>>>> [56474.241346] blk_update_request: I/O error, dev dm-2, >>>>>>>>> sector 3905945584 >>>>>>>>> [56474.242236] blk_update_request: I/O error, dev dm-2, >>>>>>>>> sector 2048 >>>>>>>>> [56474.243072] blk_update_request: I/O error, dev dm-2, >>>>>>>>> sector 3905943424 >>>>>>>>> [56474.243997] blk_update_request: I/O error, dev dm-2, >>>>>>>>> sector 3905943536 >>>>>>>>> [56474.247347] blk_update_request: I/O error, dev dm-2, >>>>>>>>> sector 0 >>>>>>>>> [56474.248315] blk_update_request: I/O error, dev dm-2, >>>>>>>>> sector 3905945472 >>>>>>>>> [56474.249231] blk_update_request: I/O error, dev dm-2, >>>>>>>>> sector 3905945584 >>>>>>>>> [56474.250221] blk_update_request: I/O error, dev dm-2, >>>>>>>>> sector 2048 >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir < >>>>>>>>> jim@palousetech.com> wrote: >>>>>>>>> >>>>>>>>>> I see in messages on ovirt3 (my 3rd machine, the one >>>>>>>>>> upgraded to 4.2): >>>>>>>>>> >>>>>>>>>> May 29 11:54:41 ovirt3 ovs-vsctl: >>>>>>>>>>
ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>>>>>>>> database connection failed (No such file or directory) >>>>>>>>>> May 29 11:54:51 ovirt3 ovs-vsctl: >>>>>>>>>> ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>>>>>>>> database connection failed (No such file or directory) >>>>>>>>>> May 29 11:55:01 ovirt3 ovs-vsctl: >>>>>>>>>> ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>>>>>>>> database connection failed (No such file or directory) >>>>>>>>>> (appears a lot). >>>>>>>>>> >>>>>>>>>> I also found, on the ssh session to that host, some sysv warnings >>>>>>>>>> about the backing disk for one of the gluster volumes (straight replica >>>>>>>>>> 3). The glusterfs process for that disk on that machine went offline. It's >>>>>>>>>> my understanding that it should continue to work with the other two >>>>>>>>>> machines while I attempt to replace that disk, right? Attempted writes >>>>>>>>>> (touching an empty file) can take 15 seconds; repeating it later will be >>>>>>>>>> much faster. >>>>>>>>>> >>>>>>>>>> Gluster generates a bunch of different log files, I don't >>>>>>>>>> know which ones you want, or from which machine(s). >>>>>>>>>> >>>>>>>>>> How do I do "volume profiling"? >>>>>>>>>> >>>>>>>>>> Thanks! >>>>>>>>>> >>>>>>>>>> On Tue, May 29, 2018 at 11:53 AM, Sahina Bose < >>>>>>>>>> sabose@redhat.com> wrote: >>>>>>>>>> >>>>>>>>>>> Do you see errors reported in the mount logs for the >>>>>>>>>>> volume? If so, could you attach the logs? >>>>>>>>>>> Any issues with your underlying disks? Can you also attach >>>>>>>>>>> output of volume profiling? >>>>>>>>>>> >>>>>>>>>>> On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir < >>>>>>>>>>> jim@palousetech.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> Ok, things have gotten MUCH worse this morning.
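On the "How do I do volume profiling?" question above: gluster ships a per-volume profiler, and its `info` output is exactly what gets pasted later in this thread. The commands below are printed rather than executed, since they need a live trusted pool; `data-hdd` is the volume name from this thread:

```shell
# Gluster volume profiling: start it, let the workload run, dump stats, stop it.
# Printed here so the snippet can be reviewed without a gluster cluster;
# run the lines themselves on any node of the pool.
PROFILE_CMDS='gluster volume profile data-hdd start
gluster volume profile data-hdd info
gluster volume profile data-hdd stop'

printf '%s\n' "$PROFILE_CMDS"
```

The `info` output shows "Cumulative Stats" since `start` plus an "Interval" section since the previous `info`, which is why the profiles quoted later in the thread carry both blocks.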
I'm >>>>>>>>>>>> getting random errors from VMs; right now, about a third of my VMs have >>>>>>>>>>>> been paused due to storage issues, and most of the remaining VMs are not >>>>>>>>>>>> performing well. >>>>>>>>>>>> >>>>>>>>>>>> At this point, I am in full EMERGENCY mode, as my >>>>>>>>>>>> production services are now impacted, and I'm getting calls coming in with >>>>>>>>>>>> problems... >>>>>>>>>>>> >>>>>>>>>>>> I'd greatly appreciate help...VMs are running VERY slowly >>>>>>>>>>>> (when they run), and they are steadily getting worse. I don't know why. I >>>>>>>>>>>> was seeing CPU peaks (to 100%) on several VMs, in perfect sync, for a few >>>>>>>>>>>> minutes at a time (while the VM became unresponsive and any VMs I was >>>>>>>>>>>> logged into that were Linux were giving me the CPU stuck messages in my >>>>>>>>>>>> original post). Is all this storage related? >>>>>>>>>>>> >>>>>>>>>>>> I also have two different gluster volumes for VM storage, >>>>>>>>>>>> and only one had the issues, but now VMs in both are being affected at the >>>>>>>>>>>> same time and in the same way. >>>>>>>>>>>> >>>>>>>>>>>> --Jim >>>>>>>>>>>> >>>>>>>>>>>> On Mon, May 28, 2018 at 10:50 PM, Sahina Bose < >>>>>>>>>>>> sabose@redhat.com> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> [Adding gluster-users to look at the heal issue] >>>>>>>>>>>>> >>>>>>>>>>>>> On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir < >>>>>>>>>>>>> jim@palousetech.com> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> Hello: >>>>>>>>>>>>>> >>>>>>>>>>>>>> I've been having some cluster and gluster performance >>>>>>>>>>>>>> issues lately. I also found that my cluster was out of date, and was >>>>>>>>>>>>>> trying to apply updates (hoping to fix some of these), and discovered the >>>>>>>>>>>>>> ovirt 4.1 repos were taken completely offline. So, I was forced to begin >>>>>>>>>>>>>> an upgrade to 4.2.
According to docs I found/read, I needed only add the new repo, do a yum update, reboot, and be good on my hosts (did the yum update, the engine-setup on my hosted engine). Things seemed to work relatively well, except for a gluster sync issue that showed up.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> My cluster is a 3 node hyperconverged cluster. I upgraded the hosted engine first, then engine 3. When engine 3 came back up, for some reason one of my gluster volumes would not sync. Here's sample output:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> [root@ovirt3 ~]# gluster volume heal data-hdd info
>>>>>>>>>>>>>> Brick 172.172.1.11:/gluster/brick3/data-hdd
>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
>>>>>>>>>>>>>> Status: Connected
>>>>>>>>>>>>>> Number of entries: 8
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Brick 172.172.1.12:/gluster/brick3/data-hdd
>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
>>>>>>>>>>>>>> Status: Connected
>>>>>>>>>>>>>> Number of entries: 8
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Brick 172.172.1.13:/gluster/brick3/data-hdd
>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
>>>>>>>>>>>>>> Status: Connected
>>>>>>>>>>>>>> Number of entries: 8
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> ---------
>>>>>>>>>>>>>> It's been in this state for a couple days now, and bandwidth monitoring shows no appreciable data moving. I've tried repeatedly commanding a full heal from all three nodes in the cluster. It's always the same files that need healing.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> When running gluster volume heal data-hdd statistics, I see sometimes different information, but always some number of "heal failed" entries.
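Comparing the heal backlogs above by eye across bricks is error-prone; a short sketch that tallies entries per brick from `gluster volume heal <vol> info` output. The embedded sample is abbreviated from the data-hdd listing above; in practice, pipe the real command straight into the function:

```shell
# Count unhealed entries per brick in "gluster volume heal <vol> info" output.
count_heal_entries() {
  awk '/^Brick / { brick = $2; count[brick] = 0 }   # new brick section
       /^\//     { count[brick]++ }                 # one path = one entry
       END       { for (b in count) print b, count[b] }'
}

# Abbreviated sample in the same shape as the data-hdd listing above.
sample='Brick 172.172.1.11:/gluster/brick3/data-hdd
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
Status: Connected
Number of entries: 2
Brick 172.172.1.12:/gluster/brick3/data-hdd
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
Status: Connected
Number of entries: 1'

printf '%s\n' "$sample" | count_heal_entries
```

One caveat: real heal output can also list entries as `<gfid:...>` rather than paths, which this sketch would not count.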
It shows 0 for split-brain.

I'm not quite sure what to do. I suspect it may be due to nodes 1 and 2 still being on the older oVirt/Gluster release, but I'm afraid to upgrade and reboot them until I have a good gluster sync (I don't need to create a split-brain issue). How do I proceed with this?

Second issue: I've been experiencing VERY POOR performance on most of my VMs. To the tune that logging into a Windows 10 VM via remote desktop can take 5 minutes, and launching QuickBooks inside said VM can easily take 10 minutes. On some Linux VMs, I get random messages like this:

Message from syslogd@unifi at May 28 20:39:23 ...
kernel:[6171996.308904] NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [mongod:14766]

(The process and PID are often different.)

I'm not quite sure what to do about this either. My initial thought was to upgrade everything to current and see if it's still there, but I cannot move forward with that until my gluster is healed...

Thanks!
--Jim

_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3LEV6ZQ3JV2XLAL7NYBTXOYMYUOTIRQF/
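A rough editorial sketch (not from the thread) for anyone triaging heal output like the above by hand: the per-brick pending counts can be pulled out of `gluster volume heal <vol> info` with a short awk filter. This assumes the GlusterFS 3.x text format shown in the message above; the here-doc sample is abbreviated stand-in data for the live command's output.

```shell
# Summarize per-brick pending-heal counts from `gluster volume heal <vol> info`.
# The sample below is abbreviated stand-in data; with a live volume you would
# pipe the real command instead:
#   gluster volume heal data-hdd info | awk '...'
sample='Brick 172.172.1.11:/gluster/brick3/data-hdd
/images/aaa
/images/bbb
Status: Connected
Number of entries: 2

Brick 172.172.1.12:/gluster/brick3/data-hdd
/images/aaa
Status: Connected
Number of entries: 1'

# Remember the current brick header, then print it with its entry count:
printf '%s\n' "$sample" | awk '
  /^Brick /             { brick = $2 }
  /^Number of entries:/ { print brick ": " $NF }
'
```

If the same entries reappear run after run, as reported here, that generally indicates self-heal is failing rather than merely slow, which is consistent with the "heal failed" counters mentioned above.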

Thanks to those who helped throughout the night as I tried to restore my services before customers noticed in full force this AM. As of now, I have my main core VM back up and running; all others are still offline. I wouldn't consider my cluster fixed, but at least the all-out disaster has been avoided.

I still have major gluster performance issues. I still see spikes of iowait, and this is with only the hosted engine and one other VM running!

My ovirt1 node appears to be operational, but the engine is unable to activate it for some unknown reason; it simply reports a failure activating it.

My storage domains are not showing up, even though the volumes are showing up. My data center is consequently "down". I don't know how to fix this; the Domains screen (where I think I'm supposed to start the storage domain) doesn't offer a start option. In the Volumes screen, it shows them online (with one brick missing in each); in Domains, it shows two up and two down; and in Data Center and Cluster, it shows "down".

In the system logs on the (only) oVirt engine that is live and operational (the first engine above is functioning for gluster purposes), I see these errors every 10 seconds:

May 30 02:45:22 ovirt2 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
May 30 02:45:32 ovirt2 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)

Below is the profile data for my volume "data":

[root@ovirt2 log]# gluster volume profile data info
Brick: ovirt2.nwfiber.com:/gluster/brick2/data
----------------------------------------------
Cumulative Stats:
   Block Size:      256b+     512b+    1024b+
 No. of Reads:        209      2502      1466
No. of Writes:          0       157        29

   Block Size:     2048b+    4096b+    8192b+
 No. of Reads:       1337     39753      9658
No. of Writes:         36     17010      4264

   Block Size:    16384b+   32768b+   65536b+
 No. of Reads:      10039     11677     12596
No. of Writes:       1497       878      1031

   Block Size:   131072b+
 No. of Reads:      13096
No. of Writes:       5895

 %-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls  Fop
 ---------  -----------  -----------  -----------  ------------  ----
      0.00      0.00 us      0.00 us      0.00 us          1485  RELEASE
      0.00      0.00 us      0.00 us      0.00 us           840  RELEASEDIR
      0.00    108.00 us    108.00 us    108.00 us             1  FLUSH
      0.00     59.41 us     20.00 us    217.00 us           116  LK
      0.01    151.54 us     87.00 us    407.00 us           127  REMOVEXATTR
      0.01    192.17 us    105.00 us    472.00 us           127  SETATTR
      0.02    344.84 us     26.00 us   1301.00 us           114  READDIR
      0.03     90.46 us     31.00 us    575.00 us           561  STATFS
      0.04     99.05 us      1.00 us    553.00 us           840  OPENDIR
      0.06     77.94 us     21.00 us    437.00 us          1418  STAT
      0.06    229.57 us    114.00 us   2476.00 us           508  XATTROP
      0.08    174.73 us     18.00 us   2958.00 us           845  GETXATTR
      0.15    195.09 us     58.00 us  35358.00 us          1490  OPEN
      0.22    242.83 us     71.00 us  22170.00 us          1719  DISCARD
      0.23    312.22 us     32.00 us   5352.00 us          1421  READDIRP
      0.39    133.23 us     14.00 us  35528.00 us          5592  INODELK
      1.13    279.87 us     20.00 us  18159.00 us          7795  LOOKUP
      3.06    190.95 us     65.00 us  60746.00 us         30797  WRITE
      3.52     63.43 us     14.00 us  15944.00 us        106735  FINODELK
      4.34     88.53 us     27.00 us  27767.00 us         94347  FSTAT
      9.09    179.41 us     58.00 us  27779.00 us         97455  FXATTROP
     23.48   1386.41 us     47.00 us 1068274.00 us        32590  FSYNC
     54.09   1017.11 us     30.00 us 620704.00 us        102333  READ

    Duration: 8149 seconds
   Data Read: 3675474396 bytes
Data Written: 1051264512 bytes

Interval 0 Stats:
   Block Size:      256b+     512b+    1024b+
 No. of Reads:        209      2502      1466
No. of Writes:          0       157        29

   Block Size:     2048b+    4096b+    8192b+
 No. of Reads:       1337     39753      9658
No. of Writes:         36     17010      4264

   Block Size:    16384b+   32768b+   65536b+
 No. of Reads:      10039     11677     12596
No. of Writes:       1497       878      1031

   Block Size:   131072b+
 No. of Reads:      13096
No. of Writes:       5895

 %-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls  Fop
 ---------  -----------  -----------  -----------  ------------  ----
      0.00      0.00 us      0.00 us      0.00 us          1485  RELEASE
      0.00      0.00 us      0.00 us      0.00 us           840  RELEASEDIR
      0.00    108.00 us    108.00 us    108.00 us             1  FLUSH
      0.00     59.41 us     20.00 us    217.00 us           116  LK
      0.01    151.54 us     87.00 us    407.00 us           127  REMOVEXATTR
      0.01    192.17 us    105.00 us    472.00 us           127  SETATTR
      0.02    344.84 us     26.00 us   1301.00 us           114  READDIR
      0.03     90.46 us     31.00 us    575.00 us           561  STATFS
      0.04     99.05 us      1.00 us    553.00 us           840  OPENDIR
      0.06     77.94 us     21.00 us    437.00 us          1418  STAT
      0.06    229.57 us    114.00 us   2476.00 us           508  XATTROP
      0.08    174.73 us     18.00 us   2958.00 us           845  GETXATTR
      0.15    195.09 us     58.00 us  35358.00 us          1490  OPEN
      0.22    242.83 us     71.00 us  22170.00 us          1719  DISCARD
      0.23    312.22 us     32.00 us   5352.00 us          1421  READDIRP
      0.39    133.23 us     14.00 us  35528.00 us          5592  INODELK
      1.14    280.34 us     20.00 us  18159.00 us          7795  LOOKUP
      3.06    190.95 us     65.00 us  60746.00 us         30797  WRITE
      3.52     63.43 us     14.00 us  15944.00 us        106735  FINODELK
      4.34     88.54 us     27.00 us  27767.00 us         94347  FSTAT
      9.09    179.41 us     58.00 us  27779.00 us         97455  FXATTROP
     23.48   1386.41 us     47.00 us 1068274.00 us        32590  FSYNC
     54.09   1017.23 us     30.00 us 620704.00 us        102333  READ

    Duration: 8149 seconds
   Data Read: 3675474396 bytes
Data Written: 1051264512 bytes

Brick: ovirt1.nwfiber.com:/gluster/brick2/data
----------------------------------------------
Cumulative Stats:
   Block Size:      512b+    1024b+    2048b+
 No. of Reads:          0         0         0
No. of Writes:        157        29        36

   Block Size:     4096b+    8192b+   16384b+
 No. of Reads:          0         0         0
No. of Writes:      17010      4264      1497

   Block Size:    32768b+   65536b+  131072b+
 No. of Reads:          0         0         0
No. of Writes:        878      1031      5895

 %-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls  Fop
 ---------  -----------  -----------  -----------  ------------  ----
      0.00      0.00 us      0.00 us      0.00 us          1485  RELEASE
      0.00      0.00 us      0.00 us      0.00 us           840  RELEASEDIR
      0.00    134.00 us    134.00 us    134.00 us             1  FLUSH
      0.00     63.33 us     29.00 us    306.00 us           116  LK
      0.00    235.20 us    128.00 us    401.00 us            56  FSTAT
      0.00    179.92 us    101.00 us    322.00 us           127  REMOVEXATTR
      0.00    485.89 us     36.00 us   2113.00 us           114  READDIR
      0.00    106.76 us     45.00 us    349.00 us           561  STATFS
      0.00    173.48 us      2.00 us  44827.00 us           840  OPENDIR
      0.00    305.77 us    168.00 us  17701.00 us           508  XATTROP
      0.01    484.49 us     22.00 us  58354.00 us           847  GETXATTR
      0.06   1916.07 us     75.00 us 453071.00 us          1490  OPEN
      0.12   3559.08 us     87.00 us 2330488.00 us         1719  DISCARD
      0.20  81133.29 us    141.00 us 2262803.00 us          127  SETATTR
      0.30   2750.99 us     23.00 us 2135624.00 us         5592  INODELK
      0.73   4738.22 us     27.00 us 2513893.00 us         7797  LOOKUP
      1.72    896.88 us     62.00 us 4370315.00 us        97455  FXATTROP
      2.49   4096.17 us     85.00 us 6057981.00 us        30797  WRITE
     33.69  15998.68 us     22.00 us 8926733.00 us       106723  FINODELK
     60.67  94348.34 us     63.00 us 7949881.00 us        32589  FSYNC

    Duration: 8148 seconds
   Data Read: 0 bytes
Data Written: 1051264512 bytes

Interval 0 Stats:
   Block Size:      512b+    1024b+    2048b+
 No. of Reads:          0         0         0
No. of Writes:        157        29        36

   Block Size:     4096b+    8192b+   16384b+
 No. of Reads:          0         0         0
No. of Writes:      17010      4264      1497

   Block Size:    32768b+   65536b+  131072b+
 No. of Reads:          0         0         0
No. of Writes:        878      1031      5895

 %-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls  Fop
 ---------  -----------  -----------  -----------  ------------  ----
      0.00      0.00 us      0.00 us      0.00 us          1485  RELEASE
      0.00      0.00 us      0.00 us      0.00 us           840  RELEASEDIR
      0.00    134.00 us    134.00 us    134.00 us             1  FLUSH
      0.00     63.33 us     29.00 us    306.00 us           116  LK
      0.00    235.20 us    128.00 us    401.00 us            56  FSTAT
      0.00    179.92 us    101.00 us    322.00 us           127  REMOVEXATTR
      0.00    485.89 us     36.00 us   2113.00 us           114  READDIR
      0.00    106.76 us     45.00 us    349.00 us           561  STATFS
      0.00    173.48 us      2.00 us  44827.00 us           840  OPENDIR
      0.00    305.77 us    168.00 us  17701.00 us           508  XATTROP
      0.01    484.49 us     22.00 us  58354.00 us           847  GETXATTR
      0.06   1916.07 us     75.00 us 453071.00 us          1490  OPEN
      0.12   3559.08 us     87.00 us 2330488.00 us         1719  DISCARD
      0.20  81133.29 us    141.00 us 2262803.00 us          127  SETATTR
      0.30   2750.99 us     23.00 us 2135624.00 us         5592  INODELK
      0.75   4864.77 us     27.00 us 2513893.00 us         7797  LOOKUP
      1.72    896.88 us     62.00 us 4370315.00 us        97455  FXATTROP
      2.49   4096.17 us     85.00 us 6057981.00 us        30797  WRITE
     33.68  15997.69 us     22.00 us 8926733.00 us       106723  FINODELK
     60.66  94348.34 us     63.00 us 7949881.00 us        32589  FSYNC

    Duration: 8148 seconds
   Data Read: 0 bytes
Data Written: 1051264512 bytes

--------------- volume info data --------------

[root@ovirt2 log]# gluster volume info data

Volume Name: data
Type: Replicate
Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick2/data
Brick2: ovirt2.nwfiber.com:/gluster/brick2/data
Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter)
Options Reconfigured:
performance.strict-o-direct: on
nfs.disable: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: off
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
server.allow-insecure: on
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on

On Wed, May 30, 2018 at 2:06 AM, Jim Kusznir <jim@palousetech.com> wrote:
Ok, I finally fixed my oVirt engine, and it's trying to run... but it turns out it's seeing 60-90% iowait and is about to be killed off by the HA agent because it can't get fully online in time. I did make the changes recommended about locks, and it doesn't seem to have helped things any.
The 3rd server (with the bad brick) is powered off at this point, so I have two servers, each with replicas of all volumes, and the 3rd server not even responding to pings...
--Jim
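An editorial aside on the iowait hunting above: profile dumps like the one earlier in this thread are easier to triage if you filter them down to the FOPs that dominate latency. A minimal sketch, assuming the standard `gluster volume profile <vol> info` row layout; the sample rows are abbreviated from the dump shown earlier, not live output.

```shell
# Keep only profile rows whose %-latency (field 1) exceeds a threshold.
# Sample rows abbreviated from the `gluster volume profile data info` dump
# earlier in the thread; with a live volume, pipe the real command through
# the same awk filter.
profile='     23.48   1386.41 us     47.00 us 1068274.00 us        32590  FSYNC
     54.09   1017.11 us     30.00 us  620704.00 us       102333  READ
      3.06    190.95 us     65.00 us   60746.00 us        30797  WRITE'

# Print "FOP percent" for every row above 10% of total latency:
printf '%s\n' "$profile" | awk '$1 + 0 > 10 { print $NF, $1 "%" }'
```

Applied to the full dump, FSYNC and READ dominate on the ovirt2 brick, and FSYNC plus FINODELK dominate on ovirt1, which fits the iowait spikes described here.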
On Wed, May 30, 2018 at 1:53 AM, Jim Kusznir <jim@palousetech.com> wrote:
Ok, made it through this one finally. It turns out there's a little log file, "sanlock.log", that had the magic info in it: the ...dom_md/ids file got messed with in the gluster crash. I had to recover that from a backup, then run hosted-engine --reinitialize-lockspace --force, and then it started working again. Now, for some reason, my hosted engine starts up but has no networking, and hosted-engine --console doesn't work either... Still looking into this now.
On Wed, May 30, 2018 at 1:29 AM, Jim Kusznir <jim@palousetech.com> wrote:
So, right now I think this message from vdsm.log captures my blocking problem:
2018-05-30 01:27:22,326-0700 INFO  (jsonrpc/5) [vdsm.api] FINISH repoStats return={u'c0acdefb-7d16-48ec-9d76-659b8fe33e2a': {'code': 0, 'actual': True, 'version': 4, 'acquired': False, 'delay': '0.00142076', 'lastCheck': '1.1', 'valid': True}} from=::1,60684, task_id=7da641b6-9a98-4db7-aafb-04ff84228cf5 (api:52)
2018-05-30 01:27:22,326-0700 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:573)
2018-05-30 01:27:23,411-0700 ERROR (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') The vm start process failed (vm:943)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 872, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2872, in _run
    dom.createWithFlags(flags)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1099, in createWithFlags
    if ret == -1: raise libvirtError('virDomainCreateWithFlags() failed', dom=self)
libvirtError: Failed to acquire lock: No space left on device
2018-05-30 01:27:23,411-0700 INFO  (vm/50392390) [virt.vm] (vmId='50392390-4f89-435c-bf3b-8254b58a4ef7') Changed state to Down: Failed to acquire lock: No space left on device (code=1) (vm:1683)
The strange thing is I can't figure out what device has no space left. The only hits on Google talk about deleting the __DIRECT_IO_TEST__ file in the storage repo, which I tried, but it did not help. Perhaps one of the files in the engine volume stores locks, and that is full/corrupt?
--Jim
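An editorial note on the error above, hedged since it is a generalization and not from the thread: when libvirt reports "No space left on device" while acquiring a *lock*, the errno typically bubbles up from sanlock (an unusable lease slot or damaged lockspace on the storage domain) rather than from a full filesystem, so the error string is worth classifying before reaching for df. A minimal sketch, using the log line copied from the vdsm.log excerpt above:

```shell
# Classify the libvirt/sanlock failure string before chasing disk space.
# The input line is copied from the vdsm.log excerpt above.
line='libvirtError: Failed to acquire lock: No space left on device'

classify() {
  case "$1" in
    # Lock acquisition + ENOSPC usually means a sanlock lease problem,
    # e.g. a damaged dom_md/ids file, not an actually full disk:
    *"Failed to acquire lock"*) echo "sanlock lease problem: check sanlock.log and the domain ids file" ;;
    *"No space left on device"*) echo "plain ENOSPC: check filesystem usage" ;;
    *) echo "unclassified" ;;
  esac
}

classify "$line"
```

This reading is consistent with the resolution described elsewhere in the thread, where sanlock.log pointed at a damaged ids file and reinitializing the lockspace cleared the error.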
On Wed, May 30, 2018 at 12:25 AM, Jim Kusznir <jim@palousetech.com> wrote:
I tried to capture a consistent log slice across the cluster from agent.log. Several systems claim they are starting the engine, but virsh never shows any sign of it. They also report that a UUID, which I'm pretty sure is not affiliated with my engine, is not found.
--------- ovirt1: ---------
MainThread::INFO::2018-05-30 00:18:11,628::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineForceStop-EngineDown) sent? sent
[root@ovirt1 vdsm]# tail -n 20 ../ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2018-05-30 00:14:02,154::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400)
MainThread::INFO::2018-05-30 00:14:02,173::states::776::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Another host already took over..
MainThread::INFO::2018-05-30 00:14:02,174::state_decorators::92::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Timeout cleared while transitioning <class 'ovirt_hosted_engine_ha.agent.states.EngineStarting'> -> <class 'ovirt_hosted_engine_ha.agent.states.EngineForceStop'>
MainThread::INFO::2018-05-30 00:14:02,533::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStarting-EngineForceStop) sent? sent
MainThread::INFO::2018-05-30 00:14:02,843::ovf_store::151::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8-49e73e6ba929
MainThread::INFO::2018-05-30 00:18:09,236::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineForceStop (score: 3400)
MainThread::INFO::2018-05-30 00:18:09,236::hosted_engine::499::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Best remote host ovirt3.nwfiber.com (id: 3, score: 3400)
MainThread::INFO::2018-05-30 00:18:09,253::hosted_engine::1007::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) Shutting down vm using `/usr/sbin/hosted-engine --vm-poweroff`
MainThread::INFO::2018-05-30 00:18:11,159::hosted_engine::1012::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) stdout:
MainThread::INFO::2018-05-30 00:18:11,160::hosted_engine::1013::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) stderr:
MainThread::ERROR::2018-05-30 00:18:11,160::hosted_engine::1021::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) Engine VM stopped on localhost
MainThread::INFO::2018-05-30 00:18:11,628::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineForceStop-EngineDown) sent? sent
MainThread::INFO::2018-05-30 00:22:17,692::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineDown (score: 3400)
MainThread::INFO::2018-05-30 00:22:17,713::state_machine::169::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Global metadata: {'maintenance': False}
MainThread::INFO::2018-05-30 00:22:17,713::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host ovirt2.nwfiber.com (id 2): {'conf_on_shared_storage': True, 'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=4348 (Wed May 30 00:16:46 2018)\nhost-id=2\nscore=3400\nvm_conf_refresh_time=4349 (Wed May 30 00:16:47 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineDown\nstopped=False\n', 'hostname': 'ovirt2.nwfiber.com', 'alive': True, 'host-id': 2, 'engine-status': {'reason': 'bad vm status', 'health': 'bad', 'vm': 'down_unexpected', 'detail': 'Down'}, 'score': 3400, 'stopped': False, 'maintenance': False, 'crc32': '5ee0a557', 'local_conf_timestamp': 4349, 'host-ts': 4348}
MainThread::INFO::2018-05-30 00:22:17,713::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host ovirt3.nwfiber.com (id 3): {'conf_on_shared_storage': True, 'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=100036 (Wed May 30 00:16:58 2018)\nhost-id=3\nscore=3400\nvm_conf_refresh_time=100037 (Wed May 30 00:16:58 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineStart\nstopped=False\n', 'hostname': 'ovirt3.nwfiber.com', 'alive': True, 'host-id': 3, 'engine-status': {'reason': 'bad vm status', 'health': 'bad', 'vm': 'down_unexpected', 'detail': 'Down'}, 'score': 3400, 'stopped': False, 'maintenance': False, 'crc32': 'e2fadf31', 'local_conf_timestamp': 100037, 'host-ts': 100036}
MainThread::INFO::2018-05-30 00:22:17,713::state_machine::177::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Local (id 1): {'engine-health': {'reason': 'vm not running on this host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'bridge': True, 'mem-free': 30186.0, 'maintenance': False, 'cpu-load': 0.0308, 'gateway': 1.0, 'storage-domain': True}
MainThread::INFO::2018-05-30 00:22:17,714::states::508::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine down and local host has best score (3400), attempting to start engine VM
MainThread::INFO::2018-05-30 00:22:18,309::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineDown-EngineStart) sent? sent
MainThread::INFO::2018-05-30 00:22:18,624::ovf_store::151::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8-49e73e6ba929
---------- ovirt2 ----------
MainThread::INFO::2018-05-30 00:08:34,534::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineUnexpectedlyDown (score: 0)
MainThread::INFO::2018-05-30 00:08:34,534::hosted_engine::499::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Best remote host ovirt3.nwfiber.com (id: 3, score: 3400)
MainThread::INFO::2018-05-30 00:08:34,545::states::678::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine down, local host does not have best score
MainThread::INFO::2018-05-30 00:12:40,627::states::693::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score) Score is 0 due to unexpected vm shutdown at Wed May 30 00:04:28 2018
MainThread::INFO::2018-05-30 00:12:40,628::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineUnexpectedlyDown (score: 0)
MainThread::INFO::2018-05-30 00:12:40,628::hosted_engine::499::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Best remote host ovirt1.nwfiber.com (id: 1, score: 3400)
MainThread::INFO::2018-05-30 00:12:40,643::states::678::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine down, local host does not have best score
MainThread::INFO::2018-05-30 00:12:40,827::ovf_store::151::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8-49e73e6ba929
MainThread::INFO::2018-05-30 00:16:46,635::states::693::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score) Score is 0 due to unexpected vm shutdown at Wed May 30 00:04:28 2018
MainThread::INFO::2018-05-30 00:16:46,636::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineUnexpectedlyDown (score: 0)
MainThread::INFO::2018-05-30 00:16:46,648::state_machine::169::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Global metadata: {'maintenance': False}
MainThread::INFO::2018-05-30 00:16:46,648::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host ovirt1.nwfiber.com (id 1): {'conf_on_shared_storage': True, 'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=7614 (Wed May 30 00:09:54 2018)\nhost-id=1\nscore=3400\nvm_conf_refresh_time=7614 (Wed May 30 00:09:54 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineStarting\nstopped=False\n', 'hostname': 'ovirt1.nwfiber.com', 'alive': True, 'host-id': 1, 'engine-status': {'reason': 'vm not running on this host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'score': 3400, 'stopped': False, 'maintenance': False, 'crc32': '1fa13e84', 'local_conf_timestamp': 7614, 'host-ts': 7614}
MainThread::INFO::2018-05-30 00:16:46,648::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host ovirt3.nwfiber.com (id 3): {'conf_on_shared_storage': True, 'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=99541 (Wed May 30 00:08:43 2018)\nhost-id=3\nscore=3400\nvm_conf_refresh_time=99542 (Wed May 30 00:08:43 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineForceStop\nstopped=False\n', 'hostname': 'ovirt3.nwfiber.com', 'alive': True, 'host-id': 3, 'engine-status': {'reason': 'bad vm status', 'health': 'bad', 'vm': 'down', 'detail': 'Down'}, 'score': 3400, 'stopped': False, 'maintenance': False, 'crc32': '56f87e2c', 'local_conf_timestamp': 99542, 'host-ts': 99541}
MainThread::INFO::2018-05-30 00:16:46,648::state_machine::177::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Local (id 2): {'engine-health': {'reason': 'bad vm status', 'health': 'bad', 'vm': 'down_unexpected', 'detail': 'Down'}, 'bridge': True, 'mem-free': 30891.0, 'maintenance': False, 'cpu-load': 0.0186, 'gateway': 1.0, 'storage-domain': True}
MainThread::INFO::2018-05-30 00:16:46,919::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineUnexpectedlyDown-EngineDown) sent? sent
MainThread::INFO::2018-05-30 00:20:52,918::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineDown (score: 3400)
MainThread::INFO::2018-05-30 00:20:52,918::hosted_engine::499::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Best remote host ovirt1.nwfiber.com (id: 1, score: 3400)
MainThread::INFO::2018-05-30 00:20:52,930::states::508::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine down and local host has best score (3400), attempting to start engine VM
MainThread::INFO::2018-05-30 00:20:53,203::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineDown-EngineStart) sent? sent
MainThread::INFO::2018-05-30 00:20:53,393::ovf_store::151::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8-49e73e6ba929
-------- ovirt3 -------
MainThread::INFO::2018-05-30 00:21:07,343::hosted_engine::929::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) Starting vm using `/usr/sbin/hosted-engine --vm-start`
MainThread::INFO::2018-05-30 00:21:08,107::hosted_engine::935::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) stdout:
MainThread::INFO::2018-05-30 00:21:08,108::hosted_engine::936::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) stderr: Command VM.getStats with args {'vmID': '50392390-4f89-435c-bf3b-8254b58a4ef7'} failed: (code=1, message=Virtual machine does not exist: {'vmId': u'50392390-4f89-435c-bf3b-8254b58a4ef7'})
MainThread::INFO::2018-05-30 00:21:08,108::hosted_engine::948::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) Engine VM started on localhost
MainThread::INFO::2018-05-30 00:21:08,413::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStart-EngineStarting) sent? sent
[root@ovirt3 images]# tail -n 20 /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2018-05-30 00:12:51,792::hosted_engine::1012::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) stdout:
MainThread::INFO::2018-05-30 00:12:51,792::hosted_engine::1013::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) stderr:
MainThread::ERROR::2018-05-30 00:12:51,793::hosted_engine::1021::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) Engine VM stopped on localhost
MainThread::INFO::2018-05-30 00:12:52,157::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineForceStop-EngineDown) sent? sent
MainThread::INFO::2018-05-30 00:16:58,237::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineDown (score: 3400)
MainThread::INFO::2018-05-30 00:16:58,256::states::508::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine down and local host has best score (3400), attempting to start engine VM
MainThread::INFO::2018-05-30 00:16:58,595::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineDown-EngineStart) sent? sent
MainThread::INFO::2018-05-30 00:16:58,816::ovf_store::151::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /var/run/vdsm/storage/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8-49e73e6ba929
MainThread::INFO::2018-05-30 00:21:05,082::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStart (score: 3400)
MainThread::INFO::2018-05-30 00:21:05,083::hosted_engine::499::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Best remote host ovirt1.nwfiber.com (id: 1, score: 3400)
MainThread::INFO::2018-05-30 00:21:05,125::hosted_engine::968::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state) Ensuring VDSM state is clear for engine VM
MainThread::INFO::2018-05-30 00:21:05,141::hosted_engine::988::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state) Cleaning state for non-running VM
MainThread::INFO::2018-05-30 00:21:07,343::hosted_engine::980::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state) Vdsm state for VM clean
MainThread::INFO::2018-05-30 00:21:07,343::hosted_engine::929::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) Starting vm using `/usr/sbin/hosted-engine --vm-start`
MainThread::INFO::2018-05-30 00:21:08,107::hosted_engine::935::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) stdout:
MainThread::INFO::2018-05-30 00:21:08,108::hosted_engine::936::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) stderr: Command VM.getStats with args {'vmID': '50392390-4f89-435c-bf3b-8254b58a4ef7'} failed: (code=1, message=Virtual machine does not exist: {'vmId': u'50392390-4f89-435c-bf3b-8254b58a4ef7'})
MainThread::INFO::2018-05-30 00:21:08,108::hosted_engine::948::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) Engine VM started on localhost
MainThread::INFO::2018-05-30 00:21:08,413::brokerlink::68::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStart-EngineStarting) sent? sent
----------
Note that virsh does not show the engine started on this machine....
On Wed, May 30, 2018 at 12:16 AM, Jim Kusznir <jim@palousetech.com> wrote:
So, on ovirt2, I ran the hosted-engine --vm-status and it took over 2 minutes to get it to return, but two of the 3 machines had current (non-stale) data:
[root@ovirt2 ~]# hosted-engine --vm-status
--== Host 1 status ==--
conf_on_shared_storage             : True
Status up-to-date                  : False
Hostname                           : ovirt1.nwfiber.com
Host ID                            : 1
Engine status                      : unknown stale-data
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : f562054b
local_conf_timestamp               : 7368
Host timestamp                     : 7367
Extra metadata (valid at timestamp):
        metadata_parse_version=1
        metadata_feature_version=1
        timestamp=7367 (Wed May 30 00:05:47 2018)
        host-id=1
        score=3400
        vm_conf_refresh_time=7368 (Wed May 30 00:05:48 2018)
        conf_on_shared_storage=True
        maintenance=False
        state=EngineStart
        stopped=False
--== Host 2 status ==--
conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : ovirt2.nwfiber.com
Host ID                            : 2
Engine status                      : {"reason": "bad vm status", "health": "bad", "vm": "down_unexpected", "detail": "Down"}
Score                              : 0
stopped                            : False
Local maintenance                  : False
crc32                              : 6822f86a
local_conf_timestamp               : 3610
Host timestamp                     : 3610
Extra metadata (valid at timestamp):
        metadata_parse_version=1
        metadata_feature_version=1
        timestamp=3610 (Wed May 30 00:04:28 2018)
        host-id=2
        score=0
        vm_conf_refresh_time=3610 (Wed May 30 00:04:28 2018)
        conf_on_shared_storage=True
        maintenance=False
        state=EngineUnexpectedlyDown
        stopped=False
        timeout=Wed Dec 31 17:09:30 1969
--== Host 3 status ==--
conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : ovirt3.nwfiber.com
Host ID                            : 3
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : 006512a3
local_conf_timestamp               : 99294
Host timestamp                     : 99294
Extra metadata (valid at timestamp):
        metadata_parse_version=1
        metadata_feature_version=1
        timestamp=99294 (Wed May 30 00:04:36 2018)
        host-id=3
        score=3400
        vm_conf_refresh_time=99294 (Wed May 30 00:04:36 2018)
        conf_on_shared_storage=True
        maintenance=False
        state=EngineStarting
        stopped=False
---------------
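When several hosts report stale or conflicting data, it helps to reduce the `hosted-engine --vm-status` output to one line per host. A small sketch (the `summarize_he_status` helper name is mine) that works on the plain-text format above:

```shell
# Hypothetical helper: one summary line per host from hosted-engine
# --vm-status output (hostname, score, up-to-date flag). Relies on the
# field order of the plain-text output: "Status up-to-date" and
# "Hostname" appear before "Score" in each host block.
summarize_he_status() {
  awk -F': ' '
    /Status up-to-date/ { fresh = $2 }
    /Hostname/          { host = $2 }
    /^Score/            { print host, "score=" $2, "up-to-date=" fresh }
  '
}
# Usage: hosted-engine --vm-status | summarize_he_status
```

Hosts whose line shows `up-to-date=False` have stale metadata and their scores should not be trusted when deciding where to start the engine.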
Seeing that the HA score was 0 on ovirt2 (where I had been focusing my efforts) while ovirt3 had a good HA score, I tried to launch the engine there:
[root@ovirt3 ff3d0185-d120-40ee-9792-ebbd5caaca8b]# hosted-engine --vm-start
Command VM.getStats with args {'vmID': '50392390-4f89-435c-bf3b-8254b58a4ef7'} failed: (code=1, message=Virtual machine does not exist: {'vmId': u'50392390-4f89-435c-bf3b-8254b58a4ef7'})
ovirt1 was returning the "ha-agent or storage not online" message, but now it's hanging (probably 2 minutes), so it may actually return status this time. I figure if it can't return status, there's no point in trying to start the VM there.
On Wed, May 30, 2018 at 12:08 AM, Jim Kusznir <jim@palousetech.com> wrote:
The agent.log was sent minutes ago in another message. Here's the gluster client log for the engine volume:
[2018-05-30 06:31:12.343709] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata selfheal on 3a430e50-48cb-463c-8657-aab5458ac155
[2018-05-30 06:31:12.348210] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-engine-replicate-0: expunging file 117c5234-e6d6-4f03-809b-a4ef6a6fac4d/hosted-engine.metadata (38a277cd-113d-4ca8-8f48-fadc944d3272) on engine-client-1
[2018-05-30 06:31:12.360602] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed metadata selfheal on 3a430e50-48cb-463c-8657-aab5458ac155. sources=[0] 2 sinks=1
[2018-05-30 06:31:12.379169] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed entry selfheal on 117c5234-e6d6-4f03-809b-a4ef6a6fac4d. sources=[0] 2 sinks=1
[2018-05-30 06:32:54.756729] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f
[2018-05-30 06:32:54.763175] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed metadata selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 06:37:06.581538] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 06c557a9-b550-4e25-b190-804e50cea65b. sources=[0] 2 sinks=1
[2018-05-30 06:39:55.739302] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 06:44:04.911825] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 06:58:05.123854] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata selfheal on 03ca7a59-e8ee-4a70-bc68-47e972aeb801
[2018-05-30 06:58:05.131929] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed metadata selfheal on 03ca7a59-e8ee-4a70-bc68-47e972aeb801. sources=[0] 2 sinks=1
[2018-05-30 07:01:42.583991] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
[2018-05-30 07:05:48.777982] I [MSGID: 108026] [afr-self-heal-common.c:1656:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on 2ecf0f33-1385-416e-8619-129bdb62b61f. sources=[0] 2 sinks=1
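The self-heal log above can be boiled down to a per-file heal count, which makes it obvious when one file (like 2ecf0f33-… here) keeps needing data self-heals and is therefore repeatedly falling out of sync. A sketch (the `heal_counts` helper name is mine), matched against the afr_log_selfheal line format shown above:

```shell
# Hypothetical helper: count completed self-heals per file id in a
# gluster client log. The file id is the field after the word "on";
# a trailing period is stripped before counting.
heal_counts() {
  awk '/Completed (data|metadata|entry) selfheal/ {
         for (i = 1; i <= NF; i++)
           if ($i == "on") { sub(/\.$/, "", $(i+1)); count[$(i+1)]++ }
       }
       END { for (f in count) print f, count[f] }'
}
# Usage: heal_counts < /var/log/glusterfs/rhev-data-center-mnt-glusterSD-<volume>.log
```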
-------- broker log: --------
Thread-1::INFO::2018-05-30 00:07:13,241::ping::60::ping.Ping::(action) Successfully pinged 192.168.8.1
Listener::INFO::2018-05-30 00:07:14,344::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:19,356::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-5::INFO::2018-05-30 00:07:22,081::engine_health::202::engine_health.EngineHealth::(_result_from_stats) VM not running on this host, status Down
Listener::INFO::2018-05-30 00:07:24,369::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:29,382::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:34,393::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:39,406::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:44,419::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:49,431::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:54,444::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:07:59,452::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-2::INFO::2018-05-30 00:08:03,736::mgmt_bridge::62::mgmt_bridge.MgmtBridge::(action) Found bridge ovirtmgmt with ports
Listener::INFO::2018-05-30 00:08:04,460::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:08:09,468::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-4::INFO::2018-05-30 00:08:11,461::cpu_load_no_engine::116::cpu_load_no_engine.CpuLoadNoEngine::(calculate_load) VM stats do not contain cpu usage. VM might be down.
Thread-4::INFO::2018-05-30 00:08:11,461::cpu_load_no_engine::126::cpu_load_no_engine.CpuLoadNoEngine::(calculate_load) System load total=0.0169, engine=0.0000, non-engine=0.0169
Listener::INFO::2018-05-30 00:08:14,481::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Listener::INFO::2018-05-30 00:08:19,490::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
Thread-5::INFO::2018-05-30 00:08:22,175::engine_health::202::engine_health.EngineHealth::(_result_from_stats) VM not running on this host, status Down
Thread-1::INFO::2018-05-30 00:08:22,436::ping::60::ping.Ping::(action) Successfully pinged 192.168.8.1
Listener::INFO::2018-05-30 00:08:24,503::storage_broker::304::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(_get_domain_monitor_status) VDSM domain monitor status: PENDING
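The repeated "VDSM domain monitor status: PENDING" lines in the broker log are the key symptom here: the hosted-storage domain monitor never reaches a healthy state, which is why the agent can't proceed. A quick way to see how long this has been going on (a sketch; the `pending_count` helper name is mine):

```shell
# Count PENDING domain-monitor reports in the broker log. A steadily
# growing count (one line roughly every 5 seconds) means the storage
# domain monitor never comes up.
pending_count() {
  grep -c 'domain monitor status: PENDING'
}
# Usage: pending_count < /var/log/ovirt-hosted-engine-ha/broker.log
```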
On Wed, May 30, 2018 at 12:03 AM, Sahina Bose <sabose@redhat.com> wrote:
> [Adding gluster-users back]
>
> Nothing amiss with volume info and status.
> Can you check the agent.log and broker.log - will be under
> /var/log/ovirt-hosted-engine-ha/
>
> Also the gluster client logs - under
> /var/log/glusterfs/rhev-data-center-mnt-glusterSD<volume>.log
>
> On Wed, May 30, 2018 at 12:08 PM, Jim Kusznir <jim@palousetech.com> wrote:
>
>> I believe the gluster data store for the engine is up and working
>> correctly. The rest are not mounted, as the engine hasn't managed to
>> start correctly yet. I did perform the copy-a-junk-file test: copy a
>> file into the data store, check that it's there on another host, then
>> delete it and see that it disappears on the first host; it passed that
>> test. Here's the info and status. (I have NOT performed the steps that
>> Krutika and Ravishankar suggested yet, as I don't have my data volumes
>> working again yet.)
>>
>> [root@ovirt2 images]# gluster volume info
>>
>> Volume Name: data
>> Type: Replicate
>> Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x (2 + 1) = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: ovirt1.nwfiber.com:/gluster/brick2/data
>> Brick2: ovirt2.nwfiber.com:/gluster/brick2/data
>> Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter)
>> Options Reconfigured:
>> diagnostics.count-fop-hits: on
>> diagnostics.latency-measurement: on
>> changelog.changelog: on
>> geo-replication.ignore-pid-check: on
>> geo-replication.indexing: on
>> server.allow-insecure: on
>> performance.readdir-ahead: on
>> performance.quick-read: off
>> performance.read-ahead: off
>> performance.io-cache: off
>> performance.stat-prefetch: off
>> cluster.eager-lock: enable
>> network.remote-dio: enable
>> cluster.quorum-type: auto
>> cluster.server-quorum-type: server
>> storage.owner-uid: 36
>> storage.owner-gid: 36
>> features.shard: on
>> features.shard-block-size: 512MB
>> performance.low-prio-threads: 32
>> cluster.data-self-heal-algorithm: full
>> cluster.locking-scheme: granular
>> cluster.shd-wait-qlength: 10000
>> cluster.shd-max-threads: 8
>> network.ping-timeout: 30
>> user.cifs: off
>> nfs.disable: on
>> performance.strict-o-direct: on
>>
>> Volume Name: data-hdd
>> Type: Replicate
>> Volume ID: d342a3ab-16f3-49f0-bbcf-f788be8ac5f1
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 3 = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: 172.172.1.11:/gluster/brick3/data-hdd
>> Brick2: 172.172.1.12:/gluster/brick3/data-hdd
>> Brick3: 172.172.1.13:/gluster/brick3/data-hdd
>> Options Reconfigured:
>> diagnostics.count-fop-hits: on
>> diagnostics.latency-measurement: on
>> network.ping-timeout: 30
>> server.allow-insecure: on
>> storage.owner-gid: 36
>> storage.owner-uid: 36
>> user.cifs: off
>> features.shard: on
>> cluster.shd-wait-qlength: 10000
>> cluster.shd-max-threads: 8
>> cluster.locking-scheme: granular
>> cluster.data-self-heal-algorithm: full
>> cluster.server-quorum-type: server
>> cluster.quorum-type: auto
>> cluster.eager-lock: enable
>> network.remote-dio: enable
>> performance.low-prio-threads: 32
>> performance.stat-prefetch: off
>> performance.io-cache: off
>> performance.read-ahead: off
>> performance.quick-read: off
>> changelog.changelog: on
>> geo-replication.ignore-pid-check: on
>> geo-replication.indexing: on
>> transport.address-family: inet
>> performance.readdir-ahead: on
>>
>> Volume Name: engine
>> Type: Replicate
>> Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x (2 + 1) = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
>> Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
>> Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
>> Options Reconfigured:
>> changelog.changelog: on
>> geo-replication.ignore-pid-check: on
>> geo-replication.indexing: on
>> performance.readdir-ahead: on
>> performance.quick-read: off
>> performance.read-ahead: off
>> performance.io-cache: off
>> performance.stat-prefetch: off
>> cluster.eager-lock: enable
>> network.remote-dio: off
>> cluster.quorum-type: auto
>> cluster.server-quorum-type: server
>> storage.owner-uid: 36
>> storage.owner-gid: 36
>> features.shard: on
>> features.shard-block-size: 512MB
>> performance.low-prio-threads: 32
>> cluster.data-self-heal-algorithm: full
>> cluster.locking-scheme: granular
>> cluster.shd-wait-qlength: 10000
>> cluster.shd-max-threads: 6
>> network.ping-timeout: 30
>> user.cifs: off
>> nfs.disable: on
>> performance.strict-o-direct: on
>>
>> Volume Name: iso
>> Type: Replicate
>> Volume ID: b1ba15f5-0f0f-4411-89d0-595179f02b92
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x (2 + 1) = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: ovirt1.nwfiber.com:/gluster/brick4/iso
>> Brick2: ovirt2.nwfiber.com:/gluster/brick4/iso
>> Brick3: ovirt3.nwfiber.com:/gluster/brick4/iso (arbiter)
>> Options Reconfigured:
>> performance.strict-o-direct: on
>> nfs.disable: on
>> user.cifs: off
>> network.ping-timeout: 30
>> cluster.shd-max-threads: 6
>> cluster.shd-wait-qlength: 10000
>> cluster.locking-scheme: granular
>> cluster.data-self-heal-algorithm: full
>> performance.low-prio-threads: 32
>> features.shard-block-size: 512MB
>> features.shard: on
>> storage.owner-gid: 36
>> storage.owner-uid: 36
>> cluster.server-quorum-type: server
>> cluster.quorum-type: auto
>> network.remote-dio: off
>> cluster.eager-lock: enable
>> performance.stat-prefetch: off
>> performance.io-cache: off
>> performance.read-ahead: off
>> performance.quick-read: off
>> performance.readdir-ahead: on
>>
>> [root@ovirt2 images]# gluster volume status
>> Status of volume: data
>> Gluster process                                  TCP Port  RDMA Port  Online  Pid
>> ------------------------------------------------------------------------------
>> Brick ovirt1.nwfiber.com:/gluster/brick2/data    49152     0          Y       3226
>> Brick ovirt2.nwfiber.com:/gluster/brick2/data    49152     0          Y       2967
>> Brick ovirt3.nwfiber.com:/gluster/brick2/data    49152     0          Y       2554
>> Self-heal Daemon on localhost                    N/A       N/A        Y       4818
>> Self-heal Daemon on ovirt3.nwfiber.com           N/A       N/A        Y       17853
>> Self-heal Daemon on ovirt1.nwfiber.com           N/A       N/A        Y       4771
>>
>> Task Status of Volume data
>> ------------------------------------------------------------------------------
>> There are no active volume tasks
>>
>> Status of volume: data-hdd
>> Gluster process                                  TCP Port  RDMA Port  Online  Pid
>> ------------------------------------------------------------------------------
>> Brick 172.172.1.11:/gluster/brick3/data-hdd      49153     0          Y       3232
>> Brick 172.172.1.12:/gluster/brick3/data-hdd      49153     0          Y       2976
>> Brick 172.172.1.13:/gluster/brick3/data-hdd      N/A       N/A        N       N/A
>> NFS Server on localhost                          N/A       N/A        N       N/A
>> Self-heal Daemon on localhost                    N/A       N/A        Y       4818
>> NFS Server on ovirt3.nwfiber.com                 N/A       N/A        N       N/A
>> Self-heal Daemon on ovirt3.nwfiber.com           N/A       N/A        Y       17853
>> NFS Server on ovirt1.nwfiber.com                 N/A       N/A        N       N/A
>> Self-heal Daemon on ovirt1.nwfiber.com           N/A       N/A        Y       4771
>>
>> Task Status of Volume data-hdd
>> ------------------------------------------------------------------------------
>> There are no active volume tasks
>>
>> Status of volume: engine
>> Gluster process                                  TCP Port  RDMA Port  Online  Pid
>> ------------------------------------------------------------------------------
>> Brick ovirt1.nwfiber.com:/gluster/brick1/engine  49154     0          Y       3239
>> Brick ovirt2.nwfiber.com:/gluster/brick1/engine  49154     0          Y       2982
>> Brick ovirt3.nwfiber.com:/gluster/brick1/engine  49154     0          Y       2578
>> Self-heal Daemon on localhost                    N/A       N/A        Y       4818
>> Self-heal Daemon on ovirt3.nwfiber.com           N/A       N/A        Y       17853
>> Self-heal Daemon on ovirt1.nwfiber.com           N/A       N/A        Y       4771
>>
>> Task Status of Volume engine
>> ------------------------------------------------------------------------------
>> There are no active volume tasks
>>
>> Status of volume: iso
>> Gluster process                                  TCP Port  RDMA Port  Online  Pid
>> ------------------------------------------------------------------------------
>> Brick ovirt1.nwfiber.com:/gluster/brick4/iso     49155     0          Y       3247
>> Brick ovirt2.nwfiber.com:/gluster/brick4/iso     49155     0          Y       2990
>> Brick ovirt3.nwfiber.com:/gluster/brick4/iso     49155     0          Y       2580
>> Self-heal Daemon on localhost                    N/A       N/A        Y       4818
>> Self-heal Daemon on ovirt3.nwfiber.com           N/A       N/A        Y       17853
>> Self-heal Daemon on ovirt1.nwfiber.com           N/A       N/A        Y       4771
>>
>> Task Status of Volume iso
>> ------------------------------------------------------------------------------
>> There are no active volume tasks
>>
>> On Tue, May 29, 2018 at 11:30 PM, Sahina Bose <sabose@redhat.com> wrote:
>>
>>> On Wed, May 30, 2018 at 10:42 AM, Jim Kusznir <jim@palousetech.com> wrote:
>>>
>>>> hosted-engine --deploy failed (it would not come up on my existing
>>>> gluster storage). However, I realized no changes were written to my
>>>> existing storage, so I went back to trying to get my old engine
>>>> running.
>>>>
>>>> hosted-engine --vm-status is now taking a very long time (5+ minutes)
>>>> to return, and it returns stale information everywhere. I thought
>>>> perhaps the lockspace was corrupt, so I tried to clean that and the
>>>> metadata, but both are failing (--clean-metadata has hung and I can't
>>>> even Ctrl-C out of it).
>>>>
>>>> How can I reinitialize all the lockspace/metadata safely? There is no
>>>> engine or VMs running currently....
>>>
>>> I think the first thing to make sure is that your storage is up and
>>> running. So can you mount the gluster volumes and access the contents
>>> there? Please provide the gluster volume info and gluster volume
>>> status of the volumes that you're using.
>>>
>>>> --Jim
>>>>
>>>> On Tue, May 29, 2018 at 9:33 PM, Jim Kusznir <jim@palousetech.com> wrote:
>>>>
>>>>> Well, things went from bad to very, very bad....
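For the "how can I reinitialize the lockspace/metadata safely" question quoted above, the commonly cited order is: global maintenance first, then reinitialize, then leave maintenance. The sketch below is mine, not from the thread; the `--reinitialize-lockspace --force` flags are assumptions to verify against `hosted-engine --help` on your version, and the `DRY_RUN` guard (on by default here) only prints the commands:

```shell
# Sketch of a hosted-engine lockspace reinitialization sequence.
# DRY_RUN=1 (the default) echoes each command instead of running it;
# set DRY_RUN=0 only after verifying the flags and with no engine VM up.
reinit_he_lockspace() {
  run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "$*"; else "$@"; fi; }
  run hosted-engine --set-maintenance --mode=global
  run hosted-engine --reinitialize-lockspace --force
  run hosted-engine --set-maintenance --mode=none
}
```

Running this with a hung sanlock lockspace while the storage domain itself is unhealthy will just hang again, so the gluster volumes need to be confirmed mountable first, as Sahina suggests.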
>>>>> It appears that during one of the 2-minute lockups, the fencing
>>>>> agents decided that another node in the cluster was down. As a
>>>>> result, 2 of the 3 nodes were simultaneously reset by a fencing-agent
>>>>> reboot. After the nodes came back up, the engine would not start.
>>>>> All running VMs (including VMs on the 3rd node, which was not
>>>>> rebooted) crashed.
>>>>>
>>>>> I've now been working for about 3 hours trying to get the engine to
>>>>> come up. I don't know why it won't start. hosted-engine --vm-start
>>>>> says it's starting, but it doesn't start (virsh doesn't show any VMs
>>>>> running). I'm currently running --deploy, as I had run out of other
>>>>> options. I hope this will allow me to re-import all my existing VMs
>>>>> and start them back up after everything comes back up.
>>>>>
>>>>> I do have an unverified geo-rep backup; I don't know if it is a good
>>>>> backup (there were several prior messages to this list, but I didn't
>>>>> get replies to my questions; it seemed to be running strangely, and
>>>>> the data directories are larger than their source).
>>>>>
>>>>> I'll see if my --deploy works, and if not, I'll be back with another
>>>>> message/help request.
>>>>>
>>>>> When the dust settles and I'm at least minimally functional again, I
>>>>> really want to understand why all these technologies designed to
>>>>> offer redundancy conspired to reduce uptime and create failures where
>>>>> there weren't any otherwise. I thought that with hosted engine, 3
>>>>> oVirt servers, and glusterfs at minimum replica 2 + arbiter or
>>>>> replica 3, I would have strong resilience against server or disk
>>>>> failure, and data corruption would be prevented or recovered from.
>>>>> Instead, all of the above happened (once I get my cluster back up, I
>>>>> still have to try to recover my webserver VM, which won't boot due to
>>>>> XFS journal corruption created during the gluster crashes). I think a
>>>>> lot of these issues were rooted in the upgrade from 4.1 to 4.2.
>>>>>
>>>>> --Jim
>>>>>
>>>>> On Tue, May 29, 2018 at 6:25 PM, Jim Kusznir <jim@palousetech.com> wrote:
>>>>>
>>>>>> I also finally found the following in my system log on one server:
>>>>>>
>>>>>> [10679.524491] INFO: task glusterclogro:14933 blocked for more than 120 seconds.
>>>>>> [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>>> [10679.527144] glusterclogro   D ffff97209832bf40     0 14933      1 0x00000080
>>>>>> [10679.527150] Call Trace:
>>>>>> [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70
>>>>>> [10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs]
>>>>>> [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20
>>>>>> [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs]
>>>>>> [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0
>>>>>> [10679.527268] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160
>>>>>> [10679.527271] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20
>>>>>> [10679.527275] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21
>>>>>> [10679.527279] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160
>>>>>> [10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds.
>>>>>> [10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>>> [10679.529956] glusterposixfsy D ffff972495f84f10 0 14941 >>>>>> 1 0x00000080 >>>>>> [10679.529961] Call Trace: >>>>>> [10679.529966] [<ffffffffb9913f79>] schedule+0x29/0x70 >>>>>> [10679.530003] [<ffffffffc060e388>] >>>>>> _xfs_log_force_lsn+0x2e8/0x340 [xfs] >>>>>> [10679.530008] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >>>>>> [10679.530038] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 >>>>>> [xfs] >>>>>> [10679.530042] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >>>>>> [10679.530046] [<ffffffffb992076f>] ? >>>>>> system_call_after_swapgs+0xbc/0x160 >>>>>> [10679.530050] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 >>>>>> [10679.530054] [<ffffffffb992082f>] >>>>>> system_call_fastpath+0x1c/0x21 >>>>>> [10679.530058] [<ffffffffb992077b>] ? >>>>>> system_call_after_swapgs+0xc8/0x160 >>>>>> [10679.530062] INFO: task glusteriotwr13:15486 blocked for more >>>>>> than 120 seconds. >>>>>> [10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >>>>>> disables this message. >>>>>> [10679.533732] glusteriotwr13 D ffff9720a83f0000 0 15486 >>>>>> 1 0x00000080 >>>>>> [10679.533738] Call Trace: >>>>>> [10679.533747] [<ffffffffb9913f79>] schedule+0x29/0x70 >>>>>> [10679.533799] [<ffffffffc060e388>] >>>>>> _xfs_log_force_lsn+0x2e8/0x340 [xfs] >>>>>> [10679.533806] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >>>>>> [10679.533846] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 >>>>>> [xfs] >>>>>> [10679.533852] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >>>>>> [10679.533858] [<ffffffffb992076f>] ? >>>>>> system_call_after_swapgs+0xbc/0x160 >>>>>> [10679.533863] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 >>>>>> [10679.533868] [<ffffffffb992082f>] >>>>>> system_call_fastpath+0x1c/0x21 >>>>>> [10679.533873] [<ffffffffb992077b>] ? >>>>>> system_call_after_swapgs+0xc8/0x160 >>>>>> [10919.512757] INFO: task glusterclogro:14933 blocked for more >>>>>> than 120 seconds. 
>>>>>> [10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >>>>>> disables this message. >>>>>> [10919.516663] glusterclogro D ffff97209832bf40 0 14933 >>>>>> 1 0x00000080 >>>>>> [10919.516677] Call Trace: >>>>>> [10919.516690] [<ffffffffb9913f79>] schedule+0x29/0x70 >>>>>> [10919.516696] [<ffffffffb99118e9>] >>>>>> schedule_timeout+0x239/0x2c0 >>>>>> [10919.516703] [<ffffffffb951cc04>] ? blk_finish_plug+0x14/0x40 >>>>>> [10919.516768] [<ffffffffc05e9224>] ? >>>>>> _xfs_buf_ioapply+0x334/0x460 [xfs] >>>>>> [10919.516774] [<ffffffffb991432d>] >>>>>> wait_for_completion+0xfd/0x140 >>>>>> [10919.516782] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >>>>>> [10919.516821] [<ffffffffc05eb0a3>] ? _xfs_buf_read+0x23/0x40 >>>>>> [xfs] >>>>>> [10919.516859] [<ffffffffc05eafa9>] >>>>>> xfs_buf_submit_wait+0xf9/0x1d0 [xfs] >>>>>> [10919.516902] [<ffffffffc061b279>] ? >>>>>> xfs_trans_read_buf_map+0x199/0x400 [xfs] >>>>>> [10919.516940] [<ffffffffc05eb0a3>] _xfs_buf_read+0x23/0x40 >>>>>> [xfs] >>>>>> [10919.516977] [<ffffffffc05eb1b9>] >>>>>> xfs_buf_read_map+0xf9/0x160 [xfs] >>>>>> [10919.517022] [<ffffffffc061b279>] >>>>>> xfs_trans_read_buf_map+0x199/0x400 [xfs] >>>>>> [10919.517057] [<ffffffffc05c8d04>] xfs_da_read_buf+0xd4/0x100 >>>>>> [xfs] >>>>>> [10919.517091] [<ffffffffc05c8d53>] >>>>>> xfs_da3_node_read+0x23/0xd0 [xfs] >>>>>> [10919.517126] [<ffffffffc05c9fee>] >>>>>> xfs_da3_node_lookup_int+0x6e/0x2f0 [xfs] >>>>>> [10919.517160] [<ffffffffc05d5a1d>] >>>>>> xfs_dir2_node_lookup+0x4d/0x170 [xfs] >>>>>> [10919.517194] [<ffffffffc05ccf5d>] xfs_dir_lookup+0x1bd/0x1e0 >>>>>> [xfs] >>>>>> [10919.517233] [<ffffffffc05fd8d9>] xfs_lookup+0x69/0x140 [xfs] >>>>>> [10919.517271] [<ffffffffc05fa018>] xfs_vn_lookup+0x78/0xc0 >>>>>> [xfs] >>>>>> [10919.517278] [<ffffffffb9425cf3>] lookup_real+0x23/0x60 >>>>>> [10919.517283] [<ffffffffb9426702>] __lookup_hash+0x42/0x60 >>>>>> [10919.517288] [<ffffffffb942d519>] SYSC_renameat2+0x3a9/0x5a0 >>>>>> [10919.517296] 
[<ffffffffb94d3753>] ? >>>>>> selinux_file_free_security+0x23/0x30 >>>>>> [10919.517304] [<ffffffffb992077b>] ? >>>>>> system_call_after_swapgs+0xc8/0x160 >>>>>> [10919.517309] [<ffffffffb992076f>] ? >>>>>> system_call_after_swapgs+0xbc/0x160 >>>>>> [10919.517313] [<ffffffffb992077b>] ? >>>>>> system_call_after_swapgs+0xc8/0x160 >>>>>> [10919.517318] [<ffffffffb992076f>] ? >>>>>> system_call_after_swapgs+0xbc/0x160 >>>>>> [10919.517323] [<ffffffffb942e58e>] SyS_renameat2+0xe/0x10 >>>>>> [10919.517328] [<ffffffffb942e5ce>] SyS_rename+0x1e/0x20 >>>>>> [10919.517333] [<ffffffffb992082f>] >>>>>> system_call_fastpath+0x1c/0x21 >>>>>> [10919.517339] [<ffffffffb992077b>] ? >>>>>> system_call_after_swapgs+0xc8/0x160 >>>>>> [11159.496095] INFO: task glusteriotwr9:15482 blocked for more >>>>>> than 120 seconds. >>>>>> [11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >>>>>> disables this message. >>>>>> [11159.498978] glusteriotwr9 D ffff971fa0fa1fa0 0 15482 >>>>>> 1 0x00000080 >>>>>> [11159.498984] Call Trace: >>>>>> [11159.498995] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50 >>>>>> [11159.498999] [<ffffffffb9913f79>] schedule+0x29/0x70 >>>>>> [11159.499003] [<ffffffffb99118e9>] >>>>>> schedule_timeout+0x239/0x2c0 >>>>>> [11159.499056] [<ffffffffc05dd9b7>] ? >>>>>> xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs] >>>>>> [11159.499082] [<ffffffffc05dd43e>] ? >>>>>> xfs_iext_bno_to_irec+0x8e/0xd0 [xfs] >>>>>> [11159.499090] [<ffffffffb92f7a12>] ? ktime_get_ts64+0x52/0xf0 >>>>>> [11159.499093] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50 >>>>>> [11159.499097] [<ffffffffb991348d>] >>>>>> io_schedule_timeout+0xad/0x130 >>>>>> [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20 >>>>>> [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50 >>>>>> [11159.499107] [<ffffffffb9911ac1>] >>>>>> __wait_on_bit_lock+0x61/0xc0 >>>>>> [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90 >>>>>> [11159.499118] [<ffffffffb92bc210>] ? 
>>>>>> wake_bit_function+0x40/0x40 >>>>>> [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70 >>>>>> [11159.499125] [<ffffffffb9394e85>] >>>>>> grab_cache_page_write_begin+0x55/0xc0 >>>>>> [11159.499130] [<ffffffffb9484b76>] >>>>>> iomap_write_begin+0x66/0x100 >>>>>> [11159.499135] [<ffffffffb9484edf>] >>>>>> iomap_write_actor+0xcf/0x1d0 >>>>>> [11159.499140] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 >>>>>> [11159.499144] [<ffffffffb94854e7>] iomap_apply+0xb7/0x150 >>>>>> [11159.499149] [<ffffffffb9485621>] >>>>>> iomap_file_buffered_write+0xa1/0xe0 >>>>>> [11159.499153] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 >>>>>> [11159.499182] [<ffffffffc05f025d>] >>>>>> xfs_file_buffered_aio_write+0x12d/0x2c0 [xfs] >>>>>> [11159.499213] [<ffffffffc05f057d>] >>>>>> xfs_file_aio_write+0x18d/0x1b0 [xfs] >>>>>> [11159.499217] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 >>>>>> [11159.499222] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 >>>>>> [11159.499225] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 >>>>>> [11159.499230] [<ffffffffb992076f>] ? >>>>>> system_call_after_swapgs+0xbc/0x160 >>>>>> [11159.499234] [<ffffffffb992082f>] >>>>>> system_call_fastpath+0x1c/0x21 >>>>>> [11159.499238] [<ffffffffb992077b>] ? >>>>>> system_call_after_swapgs+0xc8/0x160 >>>>>> [11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more >>>>>> than 120 seconds. >>>>>> [11279.490197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >>>>>> disables this message. >>>>>> [11279.491665] xfsaild/dm-10 D ffff9720a8660fd0 0 1134 >>>>>> 2 0x00000000 >>>>>> [11279.491671] Call Trace: >>>>>> [11279.491682] [<ffffffffb92a3a2e>] ? >>>>>> try_to_del_timer_sync+0x5e/0x90 >>>>>> [11279.491688] [<ffffffffb9913f79>] schedule+0x29/0x70 >>>>>> [11279.491744] [<ffffffffc060de36>] _xfs_log_force+0x1c6/0x2c0 >>>>>> [xfs] >>>>>> [11279.491750] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >>>>>> [11279.491783] [<ffffffffc0619fec>] ? 
xfsaild+0x16c/0x6f0 [xfs] >>>>>> [11279.491817] [<ffffffffc060df5c>] xfs_log_force+0x2c/0x70 >>>>>> [xfs] >>>>>> [11279.491849] [<ffffffffc0619e80>] ? >>>>>> xfs_trans_ail_cursor_first+0x90/0x90 [xfs] >>>>>> [11279.491880] [<ffffffffc0619fec>] xfsaild+0x16c/0x6f0 [xfs] >>>>>> [11279.491913] [<ffffffffc0619e80>] ? >>>>>> xfs_trans_ail_cursor_first+0x90/0x90 [xfs] >>>>>> [11279.491919] [<ffffffffb92bb161>] kthread+0xd1/0xe0 >>>>>> [11279.491926] [<ffffffffb92bb090>] ? >>>>>> insert_kthread_work+0x40/0x40 >>>>>> [11279.491932] [<ffffffffb9920677>] >>>>>> ret_from_fork_nospec_begin+0x21/0x21 >>>>>> [11279.491936] [<ffffffffb92bb090>] ? >>>>>> insert_kthread_work+0x40/0x40 >>>>>> [11279.491976] INFO: task glusterclogfsyn:14934 blocked for >>>>>> more than 120 seconds. >>>>>> [11279.493466] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >>>>>> disables this message. >>>>>> [11279.494952] glusterclogfsyn D ffff97209832af70 0 14934 >>>>>> 1 0x00000080 >>>>>> [11279.494957] Call Trace: >>>>>> [11279.494979] [<ffffffffc0309839>] ? >>>>>> __split_and_process_bio+0x2e9/0x520 [dm_mod] >>>>>> [11279.494983] [<ffffffffb9913f79>] schedule+0x29/0x70 >>>>>> [11279.494987] [<ffffffffb99118e9>] >>>>>> schedule_timeout+0x239/0x2c0 >>>>>> [11279.494997] [<ffffffffc0309d98>] ? >>>>>> dm_make_request+0x128/0x1a0 [dm_mod] >>>>>> [11279.495001] [<ffffffffb991348d>] >>>>>> io_schedule_timeout+0xad/0x130 >>>>>> [11279.495005] [<ffffffffb99145ad>] >>>>>> wait_for_completion_io+0xfd/0x140 >>>>>> [11279.495010] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >>>>>> [11279.495016] [<ffffffffb951e574>] >>>>>> blkdev_issue_flush+0xb4/0x110 >>>>>> [11279.495049] [<ffffffffc06064b9>] >>>>>> xfs_blkdev_issue_flush+0x19/0x20 [xfs] >>>>>> [11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 >>>>>> [xfs] >>>>>> [11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >>>>>> [11279.495090] [<ffffffffb992076f>] ? 
>>>>>> system_call_after_swapgs+0xbc/0x160 >>>>>> [11279.495094] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 >>>>>> [11279.495098] [<ffffffffb992082f>] >>>>>> system_call_fastpath+0x1c/0x21 >>>>>> [11279.495102] [<ffffffffb992077b>] ? >>>>>> system_call_after_swapgs+0xc8/0x160 >>>>>> [11279.495105] INFO: task glusterposixfsy:14941 blocked for >>>>>> more than 120 seconds. >>>>>> [11279.496606] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >>>>>> disables this message. >>>>>> [11279.498114] glusterposixfsy D ffff972495f84f10 0 14941 >>>>>> 1 0x00000080 >>>>>> [11279.498118] Call Trace: >>>>>> [11279.498134] [<ffffffffc0309839>] ? >>>>>> __split_and_process_bio+0x2e9/0x520 [dm_mod] >>>>>> [11279.498138] [<ffffffffb9913f79>] schedule+0x29/0x70 >>>>>> [11279.498142] [<ffffffffb99118e9>] >>>>>> schedule_timeout+0x239/0x2c0 >>>>>> [11279.498152] [<ffffffffc0309d98>] ? >>>>>> dm_make_request+0x128/0x1a0 [dm_mod] >>>>>> [11279.498156] [<ffffffffb991348d>] >>>>>> io_schedule_timeout+0xad/0x130 >>>>>> [11279.498160] [<ffffffffb99145ad>] >>>>>> wait_for_completion_io+0xfd/0x140 >>>>>> [11279.498165] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >>>>>> [11279.498169] [<ffffffffb951e574>] >>>>>> blkdev_issue_flush+0xb4/0x110 >>>>>> [11279.498202] [<ffffffffc06064b9>] >>>>>> xfs_blkdev_issue_flush+0x19/0x20 [xfs] >>>>>> [11279.498231] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 >>>>>> [xfs] >>>>>> [11279.498238] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >>>>>> [11279.498242] [<ffffffffb992076f>] ? >>>>>> system_call_after_swapgs+0xbc/0x160 >>>>>> [11279.498246] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 >>>>>> [11279.498250] [<ffffffffb992082f>] >>>>>> system_call_fastpath+0x1c/0x21 >>>>>> [11279.498254] [<ffffffffb992077b>] ? >>>>>> system_call_after_swapgs+0xc8/0x160 >>>>>> [11279.498257] INFO: task glusteriotwr1:14950 blocked for more >>>>>> than 120 seconds. >>>>>> [11279.499789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >>>>>> disables this message. 
>>>>>> [11279.501343] glusteriotwr1 D ffff97208b6daf70 0 14950 >>>>>> 1 0x00000080 >>>>>> [11279.501348] Call Trace: >>>>>> [11279.501353] [<ffffffffb9913f79>] schedule+0x29/0x70 >>>>>> [11279.501390] [<ffffffffc060e388>] >>>>>> _xfs_log_force_lsn+0x2e8/0x340 [xfs] >>>>>> [11279.501396] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >>>>>> [11279.501428] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 >>>>>> [xfs] >>>>>> [11279.501432] [<ffffffffb944ef3f>] >>>>>> generic_write_sync+0x4f/0x70 >>>>>> [11279.501461] [<ffffffffc05f0545>] >>>>>> xfs_file_aio_write+0x155/0x1b0 [xfs] >>>>>> [11279.501466] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 >>>>>> [11279.501471] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 >>>>>> [11279.501475] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 >>>>>> [11279.501479] [<ffffffffb992076f>] ? >>>>>> system_call_after_swapgs+0xbc/0x160 >>>>>> [11279.501483] [<ffffffffb992082f>] >>>>>> system_call_fastpath+0x1c/0x21 >>>>>> [11279.501489] [<ffffffffb992077b>] ? >>>>>> system_call_after_swapgs+0xc8/0x160 >>>>>> [11279.501493] INFO: task glusteriotwr4:14953 blocked for more >>>>>> than 120 seconds. >>>>>> [11279.503047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" >>>>>> disables this message. >>>>>> [11279.504630] glusteriotwr4 D ffff972499f2bf40 0 14953 >>>>>> 1 0x00000080 >>>>>> [11279.504635] Call Trace: >>>>>> [11279.504640] [<ffffffffb9913f79>] schedule+0x29/0x70 >>>>>> [11279.504676] [<ffffffffc060e388>] >>>>>> _xfs_log_force_lsn+0x2e8/0x340 [xfs] >>>>>> [11279.504681] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 >>>>>> [11279.504710] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 >>>>>> [xfs] >>>>>> [11279.504714] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 >>>>>> [11279.504718] [<ffffffffb992076f>] ? >>>>>> system_call_after_swapgs+0xbc/0x160 >>>>>> [11279.504722] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 >>>>>> [11279.504725] [<ffffffffb992082f>] >>>>>> system_call_fastpath+0x1c/0x21 >>>>>> [11279.504730] [<ffffffffb992077b>] ? 
>>>>>> system_call_after_swapgs+0xc8/0x160 >>>>>> [12127.466494] perf: interrupt took too long (8263 > 8150), >>>>>> lowering kernel.perf_event_max_sample_rate to 24000 >>>>>> >>>>>> -------------------- >>>>>> I think this is the cause of the massive ovirt performance >>>>>> issues irrespective of gluster volume. At the time this happened, I was >>>>>> also ssh'ed into the host, and was doing some rpm querry commands. I had >>>>>> just run rpm -qa |grep glusterfs (to verify what version was actually >>>>>> installed), and that command took almost 2 minutes to return! Normally it >>>>>> takes less than 2 seconds. That is all pure local SSD IO, too.... >>>>>> >>>>>> I'm no expert, but its my understanding that anytime a software >>>>>> causes these kinds of issues, its a serious bug in the software, even if >>>>>> its mis-handled exceptions. Is this correct? >>>>>> >>>>>> --Jim >>>>>> >>>>>> On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir < >>>>>> jim@palousetech.com> wrote: >>>>>> >>>>>>> I think this is the profile information for one of the volumes >>>>>>> that lives on the SSDs and is fully operational with no down/problem disks: >>>>>>> >>>>>>> [root@ovirt2 yum.repos.d]# gluster volume profile data info >>>>>>> Brick: ovirt2.nwfiber.com:/gluster/brick2/data >>>>>>> ---------------------------------------------- >>>>>>> Cumulative Stats: >>>>>>> Block Size: 256b+ 512b+ >>>>>>> 1024b+ >>>>>>> No. of Reads: 983 2696 >>>>>>> 1059 >>>>>>> No. of Writes: 0 1113 >>>>>>> 302 >>>>>>> >>>>>>> Block Size: 2048b+ 4096b+ >>>>>>> 8192b+ >>>>>>> No. of Reads: 852 88608 >>>>>>> 53526 >>>>>>> No. of Writes: 522 812340 >>>>>>> 76257 >>>>>>> >>>>>>> Block Size: 16384b+ 32768b+ >>>>>>> 65536b+ >>>>>>> No. of Reads: 54351 241901 >>>>>>> 15024 >>>>>>> No. of Writes: 21636 8656 >>>>>>> 8976 >>>>>>> >>>>>>> Block Size: 131072b+ >>>>>>> No. of Reads: 524156 >>>>>>> No. of Writes: 296071 >>>>>>> %-latency Avg-latency Min-Latency Max-Latency No. 
of >>>>>>> calls Fop >>>>>>> --------- ----------- ----------- ----------- >>>>>>> ------------ ---- >>>>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>>>> 4189 RELEASE >>>>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>>>> 1257 RELEASEDIR >>>>>>> 0.00 46.19 us 12.00 us 187.00 us >>>>>>> 69 FLUSH >>>>>>> 0.00 147.00 us 78.00 us 367.00 us >>>>>>> 86 REMOVEXATTR >>>>>>> 0.00 223.46 us 24.00 us 1166.00 us >>>>>>> 149 READDIR >>>>>>> 0.00 565.34 us 76.00 us 3639.00 us >>>>>>> 88 FTRUNCATE >>>>>>> 0.00 263.28 us 20.00 us 28385.00 us >>>>>>> 228 LK >>>>>>> 0.00 98.84 us 2.00 us 880.00 us >>>>>>> 1198 OPENDIR >>>>>>> 0.00 91.59 us 26.00 us 10371.00 us >>>>>>> 3853 STATFS >>>>>>> 0.00 494.14 us 17.00 us 193439.00 us >>>>>>> 1171 GETXATTR >>>>>>> 0.00 299.42 us 35.00 us 9799.00 us >>>>>>> 2044 READDIRP >>>>>>> 0.00 1965.31 us 110.00 us 382258.00 us >>>>>>> 321 XATTROP >>>>>>> 0.01 113.40 us 24.00 us 61061.00 us >>>>>>> 8134 STAT >>>>>>> 0.01 755.38 us 57.00 us 607603.00 us >>>>>>> 3196 DISCARD >>>>>>> 0.05 2690.09 us 58.00 us 2704761.00 us >>>>>>> 3206 OPEN >>>>>>> 0.10 119978.25 us 97.00 us 9406684.00 us >>>>>>> 154 SETATTR >>>>>>> 0.18 101.73 us 28.00 us 700477.00 us >>>>>>> 313379 FSTAT >>>>>>> 0.23 1059.84 us 25.00 us 2716124.00 us >>>>>>> 38255 LOOKUP >>>>>>> 0.47 1024.11 us 54.00 us 6197164.00 us >>>>>>> 81455 FXATTROP >>>>>>> 1.72 2984.00 us 15.00 us 37098954.00 us >>>>>>> 103020 FINODELK >>>>>>> 5.92 44315.32 us 51.00 us 24731536.00 us >>>>>>> 23957 FSYNC >>>>>>> 13.27 2399.78 us 25.00 us 22089540.00 us >>>>>>> 991005 READ >>>>>>> 37.00 5980.43 us 52.00 us 22099889.00 us >>>>>>> 1108976 WRITE >>>>>>> 41.04 5452.75 us 13.00 us 22102452.00 us >>>>>>> 1349053 INODELK >>>>>>> >>>>>>> Duration: 10026 seconds >>>>>>> Data Read: 80046027759 bytes >>>>>>> Data Written: 44496632320 bytes >>>>>>> >>>>>>> Interval 1 Stats: >>>>>>> Block Size: 256b+ 512b+ >>>>>>> 1024b+ >>>>>>> No. of Reads: 983 2696 >>>>>>> 1059 >>>>>>> No. 
of Writes: 0 838 >>>>>>> 185 >>>>>>> >>>>>>> Block Size: 2048b+ 4096b+ >>>>>>> 8192b+ >>>>>>> No. of Reads: 852 85856 >>>>>>> 51575 >>>>>>> No. of Writes: 382 705802 >>>>>>> 57812 >>>>>>> >>>>>>> Block Size: 16384b+ 32768b+ >>>>>>> 65536b+ >>>>>>> No. of Reads: 52673 232093 >>>>>>> 14984 >>>>>>> No. of Writes: 13499 4908 >>>>>>> 4242 >>>>>>> >>>>>>> Block Size: 131072b+ >>>>>>> No. of Reads: 460040 >>>>>>> No. of Writes: 6411 >>>>>>> %-latency Avg-latency Min-Latency Max-Latency No. of >>>>>>> calls Fop >>>>>>> --------- ----------- ----------- ----------- >>>>>>> ------------ ---- >>>>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>>>> 2093 RELEASE >>>>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>>>> 1093 RELEASEDIR >>>>>>> 0.00 53.38 us 26.00 us 111.00 us >>>>>>> 16 FLUSH >>>>>>> 0.00 145.14 us 78.00 us 367.00 us >>>>>>> 71 REMOVEXATTR >>>>>>> 0.00 190.96 us 114.00 us 298.00 us >>>>>>> 71 SETATTR >>>>>>> 0.00 213.38 us 24.00 us 1145.00 us >>>>>>> 90 READDIR >>>>>>> 0.00 263.28 us 20.00 us 28385.00 us >>>>>>> 228 LK >>>>>>> 0.00 101.76 us 2.00 us 880.00 us >>>>>>> 1093 OPENDIR >>>>>>> 0.01 93.60 us 27.00 us 10371.00 us >>>>>>> 3090 STATFS >>>>>>> 0.02 537.47 us 17.00 us 193439.00 us >>>>>>> 1038 GETXATTR >>>>>>> 0.03 297.44 us 35.00 us 9799.00 us >>>>>>> 1990 READDIRP >>>>>>> 0.03 2357.28 us 110.00 us 382258.00 us >>>>>>> 253 XATTROP >>>>>>> 0.04 385.93 us 58.00 us 47593.00 us >>>>>>> 2091 OPEN >>>>>>> 0.04 114.86 us 24.00 us 61061.00 us >>>>>>> 7715 STAT >>>>>>> 0.06 444.59 us 57.00 us 333240.00 us >>>>>>> 3053 DISCARD >>>>>>> 0.42 316.24 us 25.00 us 290728.00 us >>>>>>> 29823 LOOKUP >>>>>>> 0.73 257.92 us 54.00 us 344812.00 us >>>>>>> 63296 FXATTROP >>>>>>> 1.37 98.30 us 28.00 us 67621.00 us >>>>>>> 313172 FSTAT >>>>>>> 1.58 2124.69 us 51.00 us 849200.00 us >>>>>>> 16717 FSYNC >>>>>>> 5.73 162.46 us 52.00 us 748492.00 us >>>>>>> 794079 WRITE >>>>>>> 7.19 2065.17 us 16.00 us 37098954.00 us >>>>>>> 78381 FINODELK >>>>>>> 36.44 886.32 us 25.00 us 2216436.00 us >>>>>>> 925421 READ 
>>>>>>> 46.30 1178.04 us 13.00 us 1700704.00 us >>>>>>> 884635 INODELK >>>>>>> >>>>>>> Duration: 7485 seconds >>>>>>> Data Read: 71250527215 bytes >>>>>>> Data Written: 5119903744 bytes >>>>>>> >>>>>>> Brick: ovirt3.nwfiber.com:/gluster/brick2/data >>>>>>> ---------------------------------------------- >>>>>>> Cumulative Stats: >>>>>>> Block Size: 1b+ >>>>>>> No. of Reads: 0 >>>>>>> No. of Writes: 3264419 >>>>>>> %-latency Avg-latency Min-Latency Max-Latency No. of >>>>>>> calls Fop >>>>>>> --------- ----------- ----------- ----------- >>>>>>> ------------ ---- >>>>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>>>> 90 FORGET >>>>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>>>> 9462 RELEASE >>>>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>>>> 4254 RELEASEDIR >>>>>>> 0.00 50.52 us 13.00 us 190.00 us >>>>>>> 71 FLUSH >>>>>>> 0.00 186.97 us 87.00 us 713.00 us >>>>>>> 86 REMOVEXATTR >>>>>>> 0.00 79.32 us 33.00 us 189.00 us >>>>>>> 228 LK >>>>>>> 0.00 220.98 us 129.00 us 513.00 us >>>>>>> 86 SETATTR >>>>>>> 0.01 259.30 us 26.00 us 2632.00 us >>>>>>> 137 READDIR >>>>>>> 0.02 322.76 us 145.00 us 2125.00 us >>>>>>> 321 XATTROP >>>>>>> 0.03 109.55 us 2.00 us 1258.00 us >>>>>>> 1193 OPENDIR >>>>>>> 0.05 70.21 us 21.00 us 431.00 us >>>>>>> 3196 DISCARD >>>>>>> 0.05 169.26 us 21.00 us 2315.00 us >>>>>>> 1545 GETXATTR >>>>>>> 0.12 176.85 us 63.00 us 2844.00 us >>>>>>> 3206 OPEN >>>>>>> 0.61 303.49 us 90.00 us 3085.00 us >>>>>>> 9633 FSTAT >>>>>>> 2.44 305.66 us 28.00 us 3716.00 us >>>>>>> 38230 LOOKUP >>>>>>> 4.52 266.22 us 55.00 us 53424.00 us >>>>>>> 81455 FXATTROP >>>>>>> 6.96 1397.99 us 51.00 us 64822.00 us >>>>>>> 23889 FSYNC >>>>>>> 16.48 84.74 us 25.00 us 6917.00 us >>>>>>> 932592 WRITE >>>>>>> 30.16 106.90 us 13.00 us 3920189.00 us >>>>>>> 1353046 INODELK >>>>>>> 38.55 1794.52 us 14.00 us 16210553.00 us >>>>>>> 103039 FINODELK >>>>>>> >>>>>>> Duration: 66562 seconds >>>>>>> Data Read: 0 bytes >>>>>>> Data Written: 3264419 bytes >>>>>>> >>>>>>> Interval 1 Stats: >>>>>>> Block Size: 1b+ >>>>>>> No. 
of Reads: 0 >>>>>>> No. of Writes: 794080 >>>>>>> %-latency Avg-latency Min-Latency Max-Latency No. of >>>>>>> calls Fop >>>>>>> --------- ----------- ----------- ----------- >>>>>>> ------------ ---- >>>>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>>>> 2093 RELEASE >>>>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>>>> 1093 RELEASEDIR >>>>>>> 0.00 70.31 us 26.00 us 125.00 us >>>>>>> 16 FLUSH >>>>>>> 0.00 193.10 us 103.00 us 713.00 us >>>>>>> 71 REMOVEXATTR >>>>>>> 0.01 227.32 us 133.00 us 513.00 us >>>>>>> 71 SETATTR >>>>>>> 0.01 79.32 us 33.00 us 189.00 us >>>>>>> 228 LK >>>>>>> 0.01 259.83 us 35.00 us 1138.00 us >>>>>>> 89 READDIR >>>>>>> 0.03 318.26 us 145.00 us 2047.00 us >>>>>>> 253 XATTROP >>>>>>> 0.04 112.67 us 3.00 us 1258.00 us >>>>>>> 1093 OPENDIR >>>>>>> 0.06 167.98 us 23.00 us 1951.00 us >>>>>>> 1014 GETXATTR >>>>>>> 0.08 70.97 us 22.00 us 431.00 us >>>>>>> 3053 DISCARD >>>>>>> 0.13 183.78 us 66.00 us 2844.00 us >>>>>>> 2091 OPEN >>>>>>> 1.01 303.82 us 90.00 us 3085.00 us >>>>>>> 9610 FSTAT >>>>>>> 3.27 316.59 us 30.00 us 3716.00 us >>>>>>> 29820 LOOKUP >>>>>>> 5.83 265.79 us 59.00 us 53424.00 us >>>>>>> 63296 FXATTROP >>>>>>> 7.95 1373.89 us 51.00 us 64822.00 us >>>>>>> 16717 FSYNC >>>>>>> 23.17 851.99 us 14.00 us 16210553.00 us >>>>>>> 78555 FINODELK >>>>>>> 24.04 87.44 us 27.00 us 6917.00 us >>>>>>> 794081 WRITE >>>>>>> 34.36 111.91 us 14.00 us 984871.00 us >>>>>>> 886790 INODELK >>>>>>> >>>>>>> Duration: 7485 seconds >>>>>>> Data Read: 0 bytes >>>>>>> Data Written: 794080 bytes >>>>>>> >>>>>>> >>>>>>> ----------------------- >>>>>>> Here is the data from the volume that is backed by the SHDDs >>>>>>> and has one failed disk: >>>>>>> [root@ovirt2 yum.repos.d]# gluster volume profile data-hdd >>>>>>> info >>>>>>> Brick: 172.172.1.12:/gluster/brick3/data-hdd >>>>>>> -------------------------------------------- >>>>>>> Cumulative Stats: >>>>>>> Block Size: 256b+ 512b+ >>>>>>> 1024b+ >>>>>>> No. of Reads: 1702 86 >>>>>>> 16 >>>>>>> No. 
of Writes: 0 767 >>>>>>> 71 >>>>>>> >>>>>>> Block Size: 2048b+ 4096b+ >>>>>>> 8192b+ >>>>>>> No. of Reads: 19 51841 >>>>>>> 2049 >>>>>>> No. of Writes: 76 60668 >>>>>>> 35727 >>>>>>> >>>>>>> Block Size: 16384b+ 32768b+ >>>>>>> 65536b+ >>>>>>> No. of Reads: 1744 639 >>>>>>> 1088 >>>>>>> No. of Writes: 8524 2410 >>>>>>> 1285 >>>>>>> >>>>>>> Block Size: 131072b+ >>>>>>> No. of Reads: 771999 >>>>>>> No. of Writes: 29584 >>>>>>> %-latency Avg-latency Min-Latency Max-Latency No. of >>>>>>> calls Fop >>>>>>> --------- ----------- ----------- ----------- >>>>>>> ------------ ---- >>>>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>>>> 2902 RELEASE >>>>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>>>> 1517 RELEASEDIR >>>>>>> 0.00 197.00 us 197.00 us 197.00 us >>>>>>> 1 FTRUNCATE >>>>>>> 0.00 70.24 us 16.00 us 758.00 us >>>>>>> 51 FLUSH >>>>>>> 0.00 143.93 us 82.00 us 305.00 us >>>>>>> 57 REMOVEXATTR >>>>>>> 0.00 178.63 us 105.00 us 712.00 us >>>>>>> 60 SETATTR >>>>>>> 0.00 67.30 us 19.00 us 572.00 us >>>>>>> 555 LK >>>>>>> 0.00 322.80 us 23.00 us 4673.00 us >>>>>>> 138 READDIR >>>>>>> 0.00 336.56 us 106.00 us 11994.00 us >>>>>>> 237 XATTROP >>>>>>> 0.00 84.70 us 28.00 us 1071.00 us >>>>>>> 3469 STATFS >>>>>>> 0.01 387.75 us 2.00 us 146017.00 us >>>>>>> 1467 OPENDIR >>>>>>> 0.01 148.59 us 21.00 us 64374.00 us >>>>>>> 4454 STAT >>>>>>> 0.02 783.02 us 16.00 us 93502.00 us >>>>>>> 1902 GETXATTR >>>>>>> 0.03 1516.10 us 17.00 us 210690.00 us >>>>>>> 1364 ENTRYLK >>>>>>> 0.03 2555.47 us 300.00 us 674454.00 us >>>>>>> 1064 READDIRP >>>>>>> 0.07 85.74 us 19.00 us 68340.00 us >>>>>>> 62849 FSTAT >>>>>>> 0.07 1978.12 us 59.00 us 202596.00 us >>>>>>> 2729 OPEN >>>>>>> 0.22 708.57 us 15.00 us 394799.00 us >>>>>>> 25447 LOOKUP >>>>>>> 5.94 2331.74 us 15.00 us 1099530.00 us >>>>>>> 207534 FINODELK >>>>>>> 7.31 8311.75 us 58.00 us 1800216.00 us >>>>>>> 71668 FXATTROP >>>>>>> 12.49 7735.19 us 51.00 us 3595513.00 us >>>>>>> 131642 WRITE >>>>>>> 17.70 957.08 us 16.00 us 13700466.00 us >>>>>>> 1508160 INODELK 
>>>>>>> 24.55 2546.43 us 26.00 us 5077347.00 us >>>>>>> 786060 READ >>>>>>> 31.56 49699.15 us 47.00 us 3746331.00 us >>>>>>> 51777 FSYNC >>>>>>> >>>>>>> Duration: 10101 seconds >>>>>>> Data Read: 101562897361 bytes >>>>>>> Data Written: 4834450432 bytes >>>>>>> >>>>>>> Interval 0 Stats: >>>>>>> Block Size: 256b+ 512b+ >>>>>>> 1024b+ >>>>>>> No. of Reads: 1702 86 >>>>>>> 16 >>>>>>> No. of Writes: 0 767 >>>>>>> 71 >>>>>>> >>>>>>> Block Size: 2048b+ 4096b+ >>>>>>> 8192b+ >>>>>>> No. of Reads: 19 51841 >>>>>>> 2049 >>>>>>> No. of Writes: 76 60668 >>>>>>> 35727 >>>>>>> >>>>>>> Block Size: 16384b+ 32768b+ >>>>>>> 65536b+ >>>>>>> No. of Reads: 1744 639 >>>>>>> 1088 >>>>>>> No. of Writes: 8524 2410 >>>>>>> 1285 >>>>>>> >>>>>>> Block Size: 131072b+ >>>>>>> No. of Reads: 771999 >>>>>>> No. of Writes: 29584 >>>>>>> %-latency Avg-latency Min-Latency Max-Latency No. of >>>>>>> calls Fop >>>>>>> --------- ----------- ----------- ----------- >>>>>>> ------------ ---- >>>>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>>>> 2902 RELEASE >>>>>>> 0.00 0.00 us 0.00 us 0.00 us >>>>>>> 1517 RELEASEDIR >>>>>>> 0.00 197.00 us 197.00 us 197.00 us >>>>>>> 1 FTRUNCATE >>>>>>> 0.00 70.24 us 16.00 us 758.00 us >>>>>>> 51 FLUSH >>>>>>> 0.00 143.93 us 82.00 us 305.00 us >>>>>>> 57 REMOVEXATTR >>>>>>> 0.00 178.63 us 105.00 us 712.00 us >>>>>>> 60 SETATTR >>>>>>> 0.00 67.30 us 19.00 us 572.00 us >>>>>>> 555 LK >>>>>>> 0.00 322.80 us 23.00 us 4673.00 us >>>>>>> 138 READDIR >>>>>>> 0.00 336.56 us 106.00 us 11994.00 us >>>>>>> 237 XATTROP >>>>>>> 0.00 84.70 us 28.00 us 1071.00 us >>>>>>> 3469 STATFS >>>>>>> 0.01 387.75 us 2.00 us 146017.00 us >>>>>>> 1467 OPENDIR >>>>>>> 0.01 148.59 us 21.00 us 64374.00 us >>>>>>> 4454 STAT >>>>>>> 0.02 783.02 us 16.00 us 93502.00 us >>>>>>> 1902 GETXATTR >>>>>>> 0.03 1516.10 us 17.00 us 210690.00 us >>>>>>> 1364 ENTRYLK >>>>>>> 0.03 2555.47 us 300.00 us 674454.00 us >>>>>>> 1064 READDIRP >>>>>>> 0.07 85.73 us 19.00 us 68340.00 us >>>>>>> 62849 FSTAT >>>>>>> 0.07 1978.12 us 
59.00 us 202596.00 us >>>>>>> 2729 OPEN >>>>>>> 0.22 708.57 us 15.00 us 394799.00 us >>>>>>> 25447 LOOKUP >>>>>>> 5.94 2334.57 us 15.00 us 1099530.00 us >>>>>>> 207534 FINODELK >>>>>>> 7.31 8311.49 us 58.00 us 1800216.00 us >>>>>>> 71668 FXATTROP >>>>>>> 12.49 7735.32 us 51.00 us 3595513.00 us >>>>>>> 131642 WRITE >>>>>>> 17.71 957.08 us 16.00 us 13700466.00 us >>>>>>> 1508160 INODELK >>>>>>> 24.56 2546.42 us 26.00 us 5077347.00 us >>>>>>> 786060 READ >>>>>>> 31.54 49651.63 us 47.00 us 3746331.00 us >>>>>>> 51777 FSYNC >>>>>>> >>>>>>> Duration: 10101 seconds >>>>>>> Data Read: 101562897361 bytes >>>>>>> Data Written: 4834450432 bytes >>>>>>> >>>>>>> >>>>>>> On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir < >>>>>>> jim@palousetech.com> wrote: >>>>>>> >>>>>>>> Thank you for your response. >>>>>>>> >>>>>>>> I have 4 gluster volumes. 3 are replica 2 + arbitrator. >>>>>>>> replica bricks are on ovirt1 and ovirt2, arbitrator on ovirt3. The 4th >>>>>>>> volume is replica 3, with a brick on all three ovirt machines. >>>>>>>> >>>>>>>> The first 3 volumes are on an SSD disk; the 4th is on a >>>>>>>> Seagate SSHD (same in all three machines). On ovirt3, the SSHD has >>>>>>>> reported hard IO failures, and that brick is offline. However, the other >>>>>>>> two replicas are fully operational (although they still show contents in >>>>>>>> the heal info command that won't go away, but that may be the case until I >>>>>>>> replace the failed disk). >>>>>>>> >>>>>>>> What is bothering me is that ALL 4 gluster volumes are >>>>>>>> showing horrible performance issues. At this point, as the bad disk has >>>>>>>> been completely offlined, I would expect gluster to perform at normal >>>>>>>> speed, but that is definitely not the case. 
>>>>>>>> >>>>>>>> I've also noticed that the performance hits seem to come in >>>>>>>> waves: things seem to work acceptably (but slow) for a while, then >>>>>>>> suddenly, its as if all disk IO on all volumes (including non-gluster local >>>>>>>> OS disk volumes for the hosts) pause for about 30 seconds, then IO resumes >>>>>>>> again. During those times, I start getting VM not responding and host not >>>>>>>> responding notices as well as the applications having major issues. >>>>>>>> >>>>>>>> I've shut down most of my VMs and am down to just my >>>>>>>> essential core VMs (shedded about 75% of my VMs). I still am experiencing >>>>>>>> the same issues. >>>>>>>> >>>>>>>> Am I correct in believing that once the failed disk was >>>>>>>> brought offline that performance should return to normal? >>>>>>>> >>>>>>>> On Tue, May 29, 2018 at 1:27 PM, Alex K < >>>>>>>> rightkicktech@gmail.com> wrote: >>>>>>>> >>>>>>>>> I would check disks status and accessibility of mount points >>>>>>>>> where your gluster volumes reside. 
>>>>>>>>> >>>>>>>>> On Tue, May 29, 2018, 22:28 Jim Kusznir <jim@palousetech.com> >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> On one ovirt server, I'm now seeing these messages: >>>>>>>>>> [56474.239725] blk_update_request: 63 callbacks suppressed >>>>>>>>>> [56474.239732] blk_update_request: I/O error, dev dm-2, >>>>>>>>>> sector 0 >>>>>>>>>> [56474.240602] blk_update_request: I/O error, dev dm-2, >>>>>>>>>> sector 3905945472 >>>>>>>>>> [56474.241346] blk_update_request: I/O error, dev dm-2, >>>>>>>>>> sector 3905945584 >>>>>>>>>> [56474.242236] blk_update_request: I/O error, dev dm-2, >>>>>>>>>> sector 2048 >>>>>>>>>> [56474.243072] blk_update_request: I/O error, dev dm-2, >>>>>>>>>> sector 3905943424 >>>>>>>>>> [56474.243997] blk_update_request: I/O error, dev dm-2, >>>>>>>>>> sector 3905943536 >>>>>>>>>> [56474.247347] blk_update_request: I/O error, dev dm-2, >>>>>>>>>> sector 0 >>>>>>>>>> [56474.248315] blk_update_request: I/O error, dev dm-2, >>>>>>>>>> sector 3905945472 >>>>>>>>>> [56474.249231] blk_update_request: I/O error, dev dm-2, >>>>>>>>>> sector 3905945584 >>>>>>>>>> [56474.250221] blk_update_request: I/O error, dev dm-2, >>>>>>>>>> sector 2048 >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir < >>>>>>>>>> jim@palousetech.com> wrote: >>>>>>>>>> >>>>>>>>>>> I see in messages on ovirt3 (my 3rd machine, the one >>>>>>>>>>> upgraded to 4.2): >>>>>>>>>>> >>>>>>>>>>> May 29 11:54:41 ovirt3 ovs-vsctl: >>>>>>>>>>> ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>>>>>>>>> database connection failed (No such file or directory) >>>>>>>>>>> May 29 11:54:51 ovirt3 ovs-vsctl: >>>>>>>>>>> ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>>>>>>>>> database connection failed (No such file or directory) >>>>>>>>>>> May 29 11:55:01 ovirt3 ovs-vsctl: >>>>>>>>>>> ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: >>>>>>>>>>> database connection failed (No such file or directory) 
>>>>>>>>>>> (appears a lot). >>>>>>>>>>> >>>>>>>>>>> I also found on the ssh session of that, some sysv >>>>>>>>>>> warnings about the backing disk for one of the gluster volumes (straight >>>>>>>>>>> replica 3). The glusterfs process for that disk on that machine went >>>>>>>>>>> offline. Its my understanding that it should continue to work with the >>>>>>>>>>> other two machines while I attempt to replace that disk, right? Attempted >>>>>>>>>>> writes (touching an empty file) can take 15 seconds, repeating it later >>>>>>>>>>> will be much faster. >>>>>>>>>>> >>>>>>>>>>> Gluster generates a bunch of different log files, I don't >>>>>>>>>>> know what ones you want, or from which machine(s). >>>>>>>>>>> >>>>>>>>>>> How do I do "volume profiling"? >>>>>>>>>>> >>>>>>>>>>> Thanks! >>>>>>>>>>> >>>>>>>>>>> On Tue, May 29, 2018 at 11:53 AM, Sahina Bose < >>>>>>>>>>> sabose@redhat.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> Do you see errors reported in the mount logs for the >>>>>>>>>>>> volume? If so, could you attach the logs? >>>>>>>>>>>> Any issues with your underlying disks. Can you also >>>>>>>>>>>> attach output of volume profiling? >>>>>>>>>>>> >>>>>>>>>>>> On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir < >>>>>>>>>>>> jim@palousetech.com> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Ok, things have gotten MUCH worse this morning. I'm >>>>>>>>>>>>> getting random errors from VMs, right now, about a third of my VMs have >>>>>>>>>>>>> been paused due to storage issues, and most of the remaining VMs are not >>>>>>>>>>>>> performing well. >>>>>>>>>>>>> >>>>>>>>>>>>> At this point, I am in full EMERGENCY mode, as my >>>>>>>>>>>>> production services are now impacted, and I'm getting calls coming in with >>>>>>>>>>>>> problems... >>>>>>>>>>>>> >>>>>>>>>>>>> I'd greatly appreciate help...VMs are running VERY >>>>>>>>>>>>> slowly (when they run), and they are steadily getting worse. I don't know >>>>>>>>>>>>> why. 
I was seeing CPU peaks (to 100%) on several VMs, in perfect sync, for >>>>>>>>>>>>> a few minutes at a time (while the VM became unresponsive and any VMs I was >>>>>>>>>>>>> logged into that were linux were giving me the CPU stuck messages in my >>>>>>>>>>>>> origional post). Is all this storage related? >>>>>>>>>>>>> >>>>>>>>>>>>> I also have two different gluster volumes for VM >>>>>>>>>>>>> storage, and only one had the issues, but now VMs in both are being >>>>>>>>>>>>> affected at the same time and same way. >>>>>>>>>>>>> >>>>>>>>>>>>> --Jim >>>>>>>>>>>>> >>>>>>>>>>>>> On Mon, May 28, 2018 at 10:50 PM, Sahina Bose < >>>>>>>>>>>>> sabose@redhat.com> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> [Adding gluster-users to look at the heal issue] >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir < >>>>>>>>>>>>>> jim@palousetech.com> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> Hello: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> I've been having some cluster and gluster performance >>>>>>>>>>>>>>> issues lately. I also found that my cluster was out of date, and was >>>>>>>>>>>>>>> trying to apply updates (hoping to fix some of these), and discovered the >>>>>>>>>>>>>>> ovirt 4.1 repos were taken completely offline. So, I was forced to begin >>>>>>>>>>>>>>> an upgrade to 4.2. According to docs I found/read, I needed only add the >>>>>>>>>>>>>>> new repo, do a yum update, reboot, and be good on my hosts (did the yum >>>>>>>>>>>>>>> update, the engine-setup on my hosted engine). Things seemed to work >>>>>>>>>>>>>>> relatively well, except for a gluster sync issue that showed up. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> My cluster is a 3 node hyperconverged cluster. I >>>>>>>>>>>>>>> upgraded the hosted engine first, then engine 3. When engine 3 came back >>>>>>>>>>>>>>> up, for some reason one of my gluster volumes would not sync. 
Here's sample output:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> [root@ovirt3 ~]# gluster volume heal data-hdd info
>>>>>>>>>>>>>>> Brick 172.172.1.11:/gluster/brick3/data-hdd
>>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
>>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
>>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
>>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
>>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
>>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
>>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
>>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
>>>>>>>>>>>>>>> Status: Connected
>>>>>>>>>>>>>>> Number of entries: 8
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Brick 172.172.1.12:/gluster/brick3/data-hdd
>>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
>>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
>>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
>>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
>>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
>>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
>>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
>>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
>>>>>>>>>>>>>>> Status: Connected
>>>>>>>>>>>>>>> Number of entries: 8
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Brick 172.172.1.13:/gluster/brick3/data-hdd
>>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
>>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
>>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
>>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
>>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
>>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
>>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
>>>>>>>>>>>>>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
>>>>>>>>>>>>>>> Status: Connected
>>>>>>>>>>>>>>> Number of entries: 8
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> ---------
>>>>>>>>>>>>>>> It's been in this state for a couple of days now, and bandwidth monitoring shows no appreciable data moving. I've tried repeatedly commanding a full heal from all three nodes in the cluster. It's always the same files that need healing.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> When running gluster volume heal data-hdd statistics, I sometimes see different information, but always some number of "heal failed" entries. It shows 0 for split brain.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I'm not quite sure what to do. I suspect it may be due to nodes 1 and 2 still being on the older ovirt/gluster release, but I'm afraid to upgrade and reboot them until I have a good gluster sync (don't need to create a split-brain issue). How do I proceed with this?
>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Second issue: I've been experiencing VERY POOR >>>>>>>>>>>>>>> performance on most of my VMs. To the tune that logging into a windows 10 >>>>>>>>>>>>>>> vm via remote desktop can take 5 minutes, launching quickbooks inside said >>>>>>>>>>>>>>> vm can easily take 10 minutes. On some linux VMs, I get random messages >>>>>>>>>>>>>>> like this: >>>>>>>>>>>>>>> Message from syslogd@unifi at May 28 20:39:23 ... >>>>>>>>>>>>>>> kernel:[6171996.308904] NMI watchdog: BUG: soft >>>>>>>>>>>>>>> lockup - CPU#0 stuck for 22s! [mongod:14766] >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> (the process and PID are often different) >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> I'm not quite sure what to do about this either. My >>>>>>>>>>>>>>> initial thought was upgrad everything to current and see if its still >>>>>>>>>>>>>>> there, but I cannot move forward with that until my gluster is healed... >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Thanks! >>>>>>>>>>>>>>> --Jim >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>>>>> Users mailing list -- users@ovirt.org >>>>>>>>>>>>>>> To unsubscribe send an email to users-leave@ovirt.org >>>>>>>>>>>>>>> Privacy Statement: https://www.ovirt.org/site/pri >>>>>>>>>>>>>>> vacy-policy/ >>>>>>>>>>>>>>> oVirt Code of Conduct: https://www.ovirt.org/communit >>>>>>>>>>>>>>> y/about/community-guidelines/ >>>>>>>>>>>>>>> List Archives: https://lists.ovirt.org/archiv >>>>>>>>>>>>>>> es/list/users@ovirt.org/messag >>>>>>>>>>>>>>> e/3LEV6ZQ3JV2XLAL7NYBTXOYMYUOTIRQF/ >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> _______________________________________________ >>>>>>>>>> Users mailing list -- users@ovirt.org >>>>>>>>>> To unsubscribe send an email to users-leave@ovirt.org >>>>>>>>>> Privacy Statement: https://www.ovirt.org/site/pri >>>>>>>>>> vacy-policy/ >>>>>>>>>> oVirt Code of Conduct: https://www.ovirt.org/communit >>>>>>>>>> y/about/community-guidelines/ >>>>>>>>>> 

Is storage working as it should? Does the gluster mount point respond as it should? Can you write files to it? Do the physical drives say that they are OK? Can you write to the physical drives (you shouldn't bypass the gluster mount point, but you need to test the drives)? To me this sounds like broken or almost-broken hardware, or broken underlying filesystems. If one of the drives malfunctions and times out, gluster will be slow and time out. It runs writes in sync, so the slowest node will slow down the whole system.

/Johan

On May 30, 2018 08:29:46 Jim Kusznir <jim@palousetech.com> wrote:
hosted-engine --deploy failed (would not come up on my existing gluster storage). However, I realized no changes were written to my existing storage. So, I went back to trying to get my old engine running.
hosted-engine --vm-status is now taking a very long time (5+ minutes) to return, and it returns stale information everywhere. I thought perhaps the lockspace was corrupt, so I tried to clean that and the metadata, but both are failing (--clean-metadata has hung and I can't even ctrl-c out of it).
How can I reinitialize all the lockspace/metadata safely? There is no engine or VMs running currently....
--Jim
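[Not a definitive fix, but the usual sequence for wiping stale hosted-engine sanlock/metadata state looks like the sketch below. The `--reinitialize-lockspace` and `--clean-metadata` options exist in 4.2, but check `hosted-engine --help` on your build first, and only run this with the engine VM down and the HA daemons stopped on every host:]

```shell
# Sketch: reinitialize the hosted-engine lockspace and this host's metadata.
# Run on a host only after the engine VM is down cluster-wide.
reinit_hosted_engine_state() {
    # Stop the HA daemons so nothing rewrites the metadata mid-clean.
    systemctl stop ovirt-ha-agent ovirt-ha-broker

    # Recreate the sanlock lockspace on the hosted-engine storage domain.
    hosted-engine --reinitialize-lockspace --force

    # Wipe this host's slot in the shared metadata area.
    hosted-engine --clean-metadata --force-clean

    systemctl start ovirt-ha-broker ovirt-ha-agent
}
```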
On Tue, May 29, 2018 at 9:33 PM, Jim Kusznir <jim@palousetech.com> wrote: Well, things went from bad to very, very bad....
It appears that during one of the 2 minute lockups, the fencing agents decided that another node in the cluster was down. As a result, 2 of the 3 nodes were simultaneously reset with fencing agent reboot. After the nodes came back up, the engine would not start. All running VMs (including VMs on the 3rd node that was not rebooted) crashed.
I've now been working for about 3 hours trying to get the engine to come up. I don't know why it won't start. hosted-engine --vm-start says it's starting, but it doesn't start (virsh doesn't show any VMs running). I'm currently running --deploy, as I had run out of other options. I hope this will allow me to re-import all my existing VMs and start them back up after everything comes back up.
I do have an unverified geo-rep backup; I don't know if it is a good backup (there were several prior messages to this list, but I didn't get replies to my questions. It was behaving in ways I believe to be "strange", and the data directories are larger than their source).
I'll see if my --deploy works, and if not, I'll be back with another message/help request.
When the dust settles and I'm at least minimally functional again, I really want to understand why all these technologies designed to offer redundancy conspired to reduce uptime and create failures where there weren't any otherwise. I thought that with hosted engine, 3 oVirt servers, and glusterfs at a minimum of replica 2+arbiter or replica 3, I should have had strong resilience against server or disk failure, and protection from (or recovery from) data corruption. Instead, all of the above happened (once I get my cluster back up, I still have to try to recover my webserver VM, which won't boot due to XFS corrupt-journal issues created during the gluster crashes). I think a lot of these issues were rooted in the upgrade from 4.1 to 4.2.
--Jim
On Tue, May 29, 2018 at 6:25 PM, Jim Kusznir <jim@palousetech.com> wrote: I also finally found the following in my system log on one server:
[10679.524491] INFO: task glusterclogro:14933 blocked for more than 120 seconds. [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10679.527144] glusterclogro D ffff97209832bf40 0 14933 1 0x00000080 [10679.527150] Call Trace: [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.527268] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [10679.527271] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [10679.527275] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.527279] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds. [10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10679.529956] glusterposixfsy D ffff972495f84f10 0 14941 1 0x00000080 [10679.529961] Call Trace: [10679.529966] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.530003] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.530008] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.530038] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.530042] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.530046] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [10679.530050] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [10679.530054] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.530058] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120 seconds. [10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
[10679.533732] glusteriotwr13 D ffff9720a83f0000 0 15486 1 0x00000080 [10679.533738] Call Trace: [10679.533747] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.533799] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.533806] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.533846] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.533852] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.533858] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [10679.533863] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [10679.533868] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.533873] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [10919.512757] INFO: task glusterclogro:14933 blocked for more than 120 seconds. [10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10919.516663] glusterclogro D ffff97209832bf40 0 14933 1 0x00000080 [10919.516677] Call Trace: [10919.516690] [<ffffffffb9913f79>] schedule+0x29/0x70 [10919.516696] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [10919.516703] [<ffffffffb951cc04>] ? blk_finish_plug+0x14/0x40 [10919.516768] [<ffffffffc05e9224>] ? _xfs_buf_ioapply+0x334/0x460 [xfs] [10919.516774] [<ffffffffb991432d>] wait_for_completion+0xfd/0x140 [10919.516782] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10919.516821] [<ffffffffc05eb0a3>] ? _xfs_buf_read+0x23/0x40 [xfs] [10919.516859] [<ffffffffc05eafa9>] xfs_buf_submit_wait+0xf9/0x1d0 [xfs] [10919.516902] [<ffffffffc061b279>] ? 
xfs_trans_read_buf_map+0x199/0x400 [xfs] [10919.516940] [<ffffffffc05eb0a3>] _xfs_buf_read+0x23/0x40 [xfs] [10919.516977] [<ffffffffc05eb1b9>] xfs_buf_read_map+0xf9/0x160 [xfs] [10919.517022] [<ffffffffc061b279>] xfs_trans_read_buf_map+0x199/0x400 [xfs] [10919.517057] [<ffffffffc05c8d04>] xfs_da_read_buf+0xd4/0x100 [xfs] [10919.517091] [<ffffffffc05c8d53>] xfs_da3_node_read+0x23/0xd0 [xfs] [10919.517126] [<ffffffffc05c9fee>] xfs_da3_node_lookup_int+0x6e/0x2f0 [xfs] [10919.517160] [<ffffffffc05d5a1d>] xfs_dir2_node_lookup+0x4d/0x170 [xfs] [10919.517194] [<ffffffffc05ccf5d>] xfs_dir_lookup+0x1bd/0x1e0 [xfs] [10919.517233] [<ffffffffc05fd8d9>] xfs_lookup+0x69/0x140 [xfs] [10919.517271] [<ffffffffc05fa018>] xfs_vn_lookup+0x78/0xc0 [xfs] [10919.517278] [<ffffffffb9425cf3>] lookup_real+0x23/0x60 [10919.517283] [<ffffffffb9426702>] __lookup_hash+0x42/0x60 [10919.517288] [<ffffffffb942d519>] SYSC_renameat2+0x3a9/0x5a0 [10919.517296] [<ffffffffb94d3753>] ? selinux_file_free_security+0x23/0x30 [10919.517304] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [10919.517309] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [10919.517313] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [10919.517318] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [10919.517323] [<ffffffffb942e58e>] SyS_renameat2+0xe/0x10 [10919.517328] [<ffffffffb942e5ce>] SyS_rename+0x1e/0x20 [10919.517333] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10919.517339] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [11159.496095] INFO: task glusteriotwr9:15482 blocked for more than 120 seconds. [11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11159.498978] glusteriotwr9 D ffff971fa0fa1fa0 0 15482 1 0x00000080 [11159.498984] Call Trace: [11159.498995] [<ffffffffb9911f00>] ? 
bit_wait+0x50/0x50 [11159.498999] [<ffffffffb9913f79>] schedule+0x29/0x70 [11159.499003] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11159.499056] [<ffffffffc05dd9b7>] ? xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs] [11159.499082] [<ffffffffc05dd43e>] ? xfs_iext_bno_to_irec+0x8e/0xd0 [xfs] [11159.499090] [<ffffffffb92f7a12>] ? ktime_get_ts64+0x52/0xf0 [11159.499093] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50 [11159.499097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20 [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50 [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0 [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90 [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40 [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70 [11159.499125] [<ffffffffb9394e85>] grab_cache_page_write_begin+0x55/0xc0 [11159.499130] [<ffffffffb9484b76>] iomap_write_begin+0x66/0x100 [11159.499135] [<ffffffffb9484edf>] iomap_write_actor+0xcf/0x1d0 [11159.499140] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 [11159.499144] [<ffffffffb94854e7>] iomap_apply+0xb7/0x150 [11159.499149] [<ffffffffb9485621>] iomap_file_buffered_write+0xa1/0xe0 [11159.499153] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 [11159.499182] [<ffffffffc05f025d>] xfs_file_buffered_aio_write+0x12d/0x2c0 [xfs] [11159.499213] [<ffffffffc05f057d>] xfs_file_aio_write+0x18d/0x1b0 [xfs] [11159.499217] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 [11159.499222] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 [11159.499225] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 [11159.499230] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [11159.499234] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11159.499238] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than 120 seconds. 
[11279.490197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.491665] xfsaild/dm-10 D ffff9720a8660fd0 0 1134 2 0x00000000 [11279.491671] Call Trace: [11279.491682] [<ffffffffb92a3a2e>] ? try_to_del_timer_sync+0x5e/0x90 [11279.491688] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.491744] [<ffffffffc060de36>] _xfs_log_force+0x1c6/0x2c0 [xfs] [11279.491750] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.491783] [<ffffffffc0619fec>] ? xfsaild+0x16c/0x6f0 [xfs] [11279.491817] [<ffffffffc060df5c>] xfs_log_force+0x2c/0x70 [xfs] [11279.491849] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs] [11279.491880] [<ffffffffc0619fec>] xfsaild+0x16c/0x6f0 [xfs] [11279.491913] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs] [11279.491919] [<ffffffffb92bb161>] kthread+0xd1/0xe0 [11279.491926] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 [11279.491932] [<ffffffffb9920677>] ret_from_fork_nospec_begin+0x21/0x21 [11279.491936] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 [11279.491976] INFO: task glusterclogfsyn:14934 blocked for more than 120 seconds. [11279.493466] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.494952] glusterclogfsyn D ffff97209832af70 0 14934 1 0x00000080 [11279.494957] Call Trace: [11279.494979] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod] [11279.494983] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.494987] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11279.494997] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod] [11279.495001] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11279.495005] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140 [11279.495010] [<ffffffffb92cf1b0>] ? 
wake_up_state+0x20/0x20 [11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 [11279.495049] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs] [11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] [11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.495090] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [11279.495094] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [11279.495098] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.495102] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [11279.495105] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds. [11279.496606] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.498114] glusterposixfsy D ffff972495f84f10 0 14941 1 0x00000080 [11279.498118] Call Trace: [11279.498134] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod] [11279.498138] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.498142] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11279.498152] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod] [11279.498156] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11279.498160] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140 [11279.498165] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.498169] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 [11279.498202] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs] [11279.498231] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] [11279.498238] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.498242] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [11279.498246] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [11279.498250] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.498254] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [11279.498257] INFO: task glusteriotwr1:14950 blocked for more than 120 seconds. 
[11279.499789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.501343] glusteriotwr1 D ffff97208b6daf70 0 14950 1 0x00000080 [11279.501348] Call Trace: [11279.501353] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.501390] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [11279.501396] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.501428] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [11279.501432] [<ffffffffb944ef3f>] generic_write_sync+0x4f/0x70 [11279.501461] [<ffffffffc05f0545>] xfs_file_aio_write+0x155/0x1b0 [xfs] [11279.501466] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 [11279.501471] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 [11279.501475] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 [11279.501479] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [11279.501483] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.501489] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [11279.501493] INFO: task glusteriotwr4:14953 blocked for more than 120 seconds. [11279.503047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.504630] glusteriotwr4 D ffff972499f2bf40 0 14953 1 0x00000080 [11279.504635] Call Trace: [11279.504640] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.504676] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [11279.504681] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.504710] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [11279.504714] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.504718] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/0x160 [11279.504722] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [11279.504725] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.504730] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/0x160 [12127.466494] perf: interrupt took too long (8263 > 8150), lowering kernel.perf_event_max_sample_rate to 24000
-------------------- I think this is the cause of the massive ovirt performance issues, irrespective of gluster volume. At the time this happened, I was also ssh'ed into the host and was running some rpm query commands. I had just run rpm -qa | grep glusterfs (to verify what version was actually installed), and that command took almost 2 minutes to return! Normally it takes less than 2 seconds. That is all pure local SSD IO, too...
I'm no expert, but it's my understanding that any time software causes these kinds of issues, it's a serious bug in the software, even if it's mishandled exceptions. Is this correct?
--Jim
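[As an aside, when a box is spewing hung-task traces like the ones above, it helps to tally which tasks are being reported and how often. A small stdin filter (a sketch):]

```shell
# Count hung-task reports per task:pid from kernel log text on stdin.
# Usage: dmesg | hung_tasks
hung_tasks() {
    grep -o 'INFO: task [^:]*:[0-9]*' | sort | uniq -c | sort -rn
}
```

Repeated reports for the same PID (as with glusterclogro:14933 above) mean the task never unblocked between the kernel's 120-second scans.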
On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir <jim@palousetech.com> wrote: I think this is the profile information for one of the volumes that lives on the SSDs and is fully operational with no down/problem disks:
[root@ovirt2 yum.repos.d]# gluster volume profile data info Brick: ovirt2.nwfiber.com:/gluster/brick2/data ---------------------------------------------- Cumulative Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 983 2696 1059 No. of Writes: 0 1113 302
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 852 88608 53526 No. of Writes: 522 812340 76257
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 54351 241901 15024 No. of Writes: 21636 8656 8976
Block Size: 131072b+ No. of Reads: 524156 No. of Writes: 296071 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 4189 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1257 RELEASEDIR 0.00 46.19 us 12.00 us 187.00 us 69 FLUSH 0.00 147.00 us 78.00 us 367.00 us 86 REMOVEXATTR 0.00 223.46 us 24.00 us 1166.00 us 149 READDIR 0.00 565.34 us 76.00 us 3639.00 us 88 FTRUNCATE 0.00 263.28 us 20.00 us 28385.00 us 228 LK 0.00 98.84 us 2.00 us 880.00 us 1198 OPENDIR 0.00 91.59 us 26.00 us 10371.00 us 3853 STATFS 0.00 494.14 us 17.00 us 193439.00 us 1171 GETXATTR 0.00 299.42 us 35.00 us 9799.00 us 2044 READDIRP 0.00 1965.31 us 110.00 us 382258.00 us 321 XATTROP 0.01 113.40 us 24.00 us 61061.00 us 8134 STAT 0.01 755.38 us 57.00 us 607603.00 us 3196 DISCARD 0.05 2690.09 us 58.00 us 2704761.00 us 3206 OPEN 0.10 119978.25 us 97.00 us 9406684.00 us 154 SETATTR 0.18 101.73 us 28.00 us 700477.00 us 313379 FSTAT 0.23 1059.84 us 25.00 us 2716124.00 us 38255 LOOKUP 0.47 1024.11 us 54.00 us 6197164.00 us 81455 FXATTROP 1.72 2984.00 us 15.00 us 37098954.00 us 103020 FINODELK 5.92 44315.32 us 51.00 us 24731536.00 us 23957 FSYNC 13.27 2399.78 us 25.00 us 22089540.00 us 991005 READ 37.00 5980.43 us 52.00 us 22099889.00 us 1108976 WRITE 41.04 5452.75 us 13.00 us 22102452.00 us 1349053 INODELK
Duration: 10026 seconds Data Read: 80046027759 bytes Data Written: 44496632320 bytes
Interval 1 Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 983 2696 1059 No. of Writes: 0 838 185
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 852 85856 51575 No. of Writes: 382 705802 57812
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 52673 232093 14984 No. of Writes: 13499 4908 4242
Block Size: 131072b+ No. of Reads: 460040 No. of Writes: 6411 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2093 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1093 RELEASEDIR 0.00 53.38 us 26.00 us 111.00 us 16 FLUSH 0.00 145.14 us 78.00 us 367.00 us 71 REMOVEXATTR 0.00 190.96 us 114.00 us 298.00 us 71 SETATTR 0.00 213.38 us 24.00 us 1145.00 us 90 READDIR 0.00 263.28 us 20.00 us 28385.00 us 228 LK 0.00 101.76 us 2.00 us 880.00 us 1093 OPENDIR 0.01 93.60 us 27.00 us 10371.00 us 3090 STATFS 0.02 537.47 us 17.00 us 193439.00 us 1038 GETXATTR 0.03 297.44 us 35.00 us 9799.00 us 1990 READDIRP 0.03 2357.28 us 110.00 us 382258.00 us 253 XATTROP 0.04 385.93 us 58.00 us 47593.00 us 2091 OPEN 0.04 114.86 us 24.00 us 61061.00 us 7715 STAT 0.06 444.59 us 57.00 us 333240.00 us 3053 DISCARD 0.42 316.24 us 25.00 us 290728.00 us 29823 LOOKUP 0.73 257.92 us 54.00 us 344812.00 us 63296 FXATTROP 1.37 98.30 us 28.00 us 67621.00 us 313172 FSTAT 1.58 2124.69 us 51.00 us 849200.00 us 16717 FSYNC 5.73 162.46 us 52.00 us 748492.00 us 794079 WRITE 7.19 2065.17 us 16.00 us 37098954.00 us 78381 FINODELK 36.44 886.32 us 25.00 us 2216436.00 us 925421 READ 46.30 1178.04 us 13.00 us 1700704.00 us 884635 INODELK
Duration: 7485 seconds Data Read: 71250527215 bytes Data Written: 5119903744 bytes
Brick: ovirt3.nwfiber.com:/gluster/brick2/data ---------------------------------------------- Cumulative Stats: Block Size: 1b+ No. of Reads: 0 No. of Writes: 3264419 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 90 FORGET 0.00 0.00 us 0.00 us 0.00 us 9462 RELEASE 0.00 0.00 us 0.00 us 0.00 us 4254 RELEASEDIR 0.00 50.52 us 13.00 us 190.00 us 71 FLUSH 0.00 186.97 us 87.00 us 713.00 us 86 REMOVEXATTR 0.00 79.32 us 33.00 us 189.00 us 228 LK 0.00 220.98 us 129.00 us 513.00 us 86 SETATTR 0.01 259.30 us 26.00 us 2632.00 us 137 READDIR 0.02 322.76 us 145.00 us 2125.00 us 321 XATTROP 0.03 109.55 us 2.00 us 1258.00 us 1193 OPENDIR 0.05 70.21 us 21.00 us 431.00 us 3196 DISCARD 0.05 169.26 us 21.00 us 2315.00 us 1545 GETXATTR 0.12 176.85 us 63.00 us 2844.00 us 3206 OPEN 0.61 303.49 us 90.00 us 3085.00 us 9633 FSTAT 2.44 305.66 us 28.00 us 3716.00 us 38230 LOOKUP 4.52 266.22 us 55.00 us 53424.00 us 81455 FXATTROP 6.96 1397.99 us 51.00 us 64822.00 us 23889 FSYNC 16.48 84.74 us 25.00 us 6917.00 us 932592 WRITE 30.16 106.90 us 13.00 us 3920189.00 us 1353046 INODELK 38.55 1794.52 us 14.00 us 16210553.00 us 103039 FINODELK
Duration: 66562 seconds Data Read: 0 bytes Data Written: 3264419 bytes
Interval 1 Stats: Block Size: 1b+ No. of Reads: 0 No. of Writes: 794080 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2093 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1093 RELEASEDIR 0.00 70.31 us 26.00 us 125.00 us 16 FLUSH 0.00 193.10 us 103.00 us 713.00 us 71 REMOVEXATTR 0.01 227.32 us 133.00 us 513.00 us 71 SETATTR 0.01 79.32 us 33.00 us 189.00 us 228 LK 0.01 259.83 us 35.00 us 1138.00 us 89 READDIR 0.03 318.26 us 145.00 us 2047.00 us 253 XATTROP 0.04 112.67 us 3.00 us 1258.00 us 1093 OPENDIR 0.06 167.98 us 23.00 us 1951.00 us 1014 GETXATTR 0.08 70.97 us 22.00 us 431.00 us 3053 DISCARD 0.13 183.78 us 66.00 us 2844.00 us 2091 OPEN 1.01 303.82 us 90.00 us 3085.00 us 9610 FSTAT 3.27 316.59 us 30.00 us 3716.00 us 29820 LOOKUP 5.83 265.79 us 59.00 us 53424.00 us 63296 FXATTROP 7.95 1373.89 us 51.00 us 64822.00 us 16717 FSYNC 23.17 851.99 us 14.00 us 16210553.00 us 78555 FINODELK 24.04 87.44 us 27.00 us 6917.00 us 794081 WRITE 34.36 111.91 us 14.00 us 984871.00 us 886790 INODELK
Duration: 7485 seconds Data Read: 0 bytes Data Written: 794080 bytes
-----------------------

Here is the data from the volume that is backed by the SSHDs and has one failed disk:

[root@ovirt2 yum.repos.d]# gluster volume profile data-hdd info Brick: 172.172.1.12:/gluster/brick3/data-hdd -------------------------------------------- Cumulative Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 1702 86 16 No. of Writes: 0 767 71
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 19 51841 2049 No. of Writes: 76 60668 35727
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 1744 639 1088 No. of Writes: 8524 2410 1285
Block Size: 131072b+ No. of Reads: 771999 No. of Writes: 29584 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2902 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1517 RELEASEDIR 0.00 197.00 us 197.00 us 197.00 us 1 FTRUNCATE 0.00 70.24 us 16.00 us 758.00 us 51 FLUSH 0.00 143.93 us 82.00 us 305.00 us 57 REMOVEXATTR 0.00 178.63 us 105.00 us 712.00 us 60 SETATTR 0.00 67.30 us 19.00 us 572.00 us 555 LK 0.00 322.80 us 23.00 us 4673.00 us 138 READDIR 0.00 336.56 us 106.00 us 11994.00 us 237 XATTROP 0.00 84.70 us 28.00 us 1071.00 us 3469 STATFS 0.01 387.75 us 2.00 us 146017.00 us 1467 OPENDIR 0.01 148.59 us 21.00 us 64374.00 us 4454 STAT 0.02 783.02 us 16.00 us 93502.00 us 1902 GETXATTR 0.03 1516.10 us 17.00 us 210690.00 us 1364 ENTRYLK 0.03 2555.47 us 300.00 us 674454.00 us 1064 READDIRP 0.07 85.74 us 19.00 us 68340.00 us 62849 FSTAT 0.07 1978.12 us 59.00 us 202596.00 us 2729 OPEN 0.22 708.57 us 15.00 us 394799.00 us 25447 LOOKUP 5.94 2331.74 us 15.00 us 1099530.00 us 207534 FINODELK 7.31 8311.75 us 58.00 us 1800216.00 us 71668 FXATTROP 12.49 7735.19 us 51.00 us 3595513.00 us 131642 WRITE 17.70 957.08 us 16.00 us 13700466.00 us 1508160 INODELK 24.55 2546.43 us 26.00 us 5077347.00 us 786060 READ 31.56 49699.15 us 47.00 us 3746331.00 us 51777 FSYNC
Duration: 10101 seconds Data Read: 101562897361 bytes Data Written: 4834450432 bytes
Interval 0 Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 1702 86 16 No. of Writes: 0 767 71
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 19 51841 2049 No. of Writes: 76 60668 35727
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 1744 639 1088 No. of Writes: 8524 2410 1285
Block Size: 131072b+ No. of Reads: 771999 No. of Writes: 29584 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2902 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1517 RELEASEDIR 0.00 197.00 us 197.00 us 197.00 us 1 FTRUNCATE 0.00 70.24 us 16.00 us 758.00 us 51 FLUSH 0.00 143.93 us 82.00 us 305.00 us 57 REMOVEXATTR 0.00 178.63 us 105.00 us 712.00 us 60 SETATTR 0.00 67.30 us 19.00 us 572.00 us 555 LK 0.00 322.80 us 23.00 us 4673.00 us 138 READDIR 0.00 336.56 us 106.00 us 11994.00 us 237 XATTROP 0.00 84.70 us 28.00 us 1071.00 us 3469 STATFS 0.01 387.75 us 2.00 us 146017.00 us 1467 OPENDIR 0.01 148.59 us 21.00 us 64374.00 us 4454 STAT 0.02 783.02 us 16.00 us 93502.00 us 1902 GETXATTR 0.03 1516.10 us 17.00 us 210690.00 us 1364 ENTRYLK 0.03 2555.47 us 300.00 us 674454.00 us 1064 READDIRP 0.07 85.73 us 19.00 us 68340.00 us 62849 FSTAT 0.07 1978.12 us 59.00 us 202596.00 us 2729 OPEN 0.22 708.57 us 15.00 us 394799.00 us 25447 LOOKUP 5.94 2334.57 us 15.00 us 1099530.00 us 207534 FINODELK 7.31 8311.49 us 58.00 us 1800216.00 us 71668 FXATTROP 12.49 7735.32 us 51.00 us 3595513.00 us 131642 WRITE 17.71 957.08 us 16.00 us 13700466.00 us 1508160 INODELK 24.56 2546.42 us 26.00 us 5077347.00 us 786060 READ 31.54 49651.63 us 47.00 us 3746331.00 us 51777 FSYNC
Duration: 10101 seconds Data Read: 101562897361 bytes Data Written: 4834450432 bytes
On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir <jim@palousetech.com> wrote: Thank you for your response.
I have 4 gluster volumes. 3 are replica 2 + arbitrator. replica bricks are on ovirt1 and ovirt2, arbitrator on ovirt3. The 4th volume is replica 3, with a brick on all three ovirt machines.
The first 3 volumes are on an SSD disk; the 4th is on a Seagate SSHD (same in all three machines). On ovirt3, the SSHD has reported hard IO failures, and that brick is offline. However, the other two replicas are fully operational (although they still show contents in the heal info command that won't go away, but that may be the case until I replace the failed disk).
What is bothering me is that ALL 4 gluster volumes are showing horrible performance issues. At this point, as the bad disk has been completely offlined, I would expect gluster to perform at normal speed, but that is definitely not the case.
I've also noticed that the performance hits seem to come in waves: things work acceptably (but slowly) for a while, then suddenly it's as if all disk IO on all volumes (including non-gluster local OS disk volumes for the hosts) pauses for about 30 seconds, then IO resumes. During those times, I start getting VM-not-responding and host-not-responding notices, and the applications have major issues.
I've shut down most of my VMs and am down to just my essential core VMs (I shed about 75% of my VMs). I'm still experiencing the same issues.
Am I correct in believing that once the failed disk was brought offline that performance should return to normal?
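[Regarding the failed SSHD: a replica volume should keep serving from its remaining bricks. Once the disk is physically replaced and a fresh filesystem is mounted at the brick path, the standard way to reintroduce it is gluster's reset-brick. A hedged sketch, assuming the brick address from the heal output earlier in the thread (172.172.1.13, i.e. ovirt3); verify the subcommand against your gluster version first:]

```shell
# Sketch: replace a failed brick in place after swapping the disk.
replace_failed_brick() {
    local vol=data-hdd
    local brick=172.172.1.13:/gluster/brick3/data-hdd

    # Take the dead brick out of service.
    gluster volume reset-brick "$vol" "$brick" start
    # ...swap the disk, recreate and mount the filesystem at the same path, then:
    gluster volume reset-brick "$vol" "$brick" "$brick" commit force
    # Self-heal then repopulates the new brick from the other replicas.
    gluster volume heal "$vol" full
}
```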
On Tue, May 29, 2018 at 1:27 PM, Alex K <rightkicktech@gmail.com> wrote: I would check disks status and accessibility of mount points where your gluster volumes reside.
On Tue, May 29, 2018, 22:28 Jim Kusznir <jim@palousetech.com> wrote: On one ovirt server, I'm now seeing these messages: [56474.239725] blk_update_request: 63 callbacks suppressed [56474.239732] blk_update_request: I/O error, dev dm-2, sector 0 [56474.240602] blk_update_request: I/O error, dev dm-2, sector 3905945472 [56474.241346] blk_update_request: I/O error, dev dm-2, sector 3905945584 [56474.242236] blk_update_request: I/O error, dev dm-2, sector 2048 [56474.243072] blk_update_request: I/O error, dev dm-2, sector 3905943424 [56474.243997] blk_update_request: I/O error, dev dm-2, sector 3905943536 [56474.247347] blk_update_request: I/O error, dev dm-2, sector 0 [56474.248315] blk_update_request: I/O error, dev dm-2, sector 3905945472 [56474.249231] blk_update_request: I/O error, dev dm-2, sector 3905945584 [56474.250221] blk_update_request: I/O error, dev dm-2, sector 2048
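[When blk_update_request errors name only dm-2, the first step is mapping that node back to an LVM volume and the physical disk beneath it. A sketch using standard sysfs paths (verify on the affected host):]

```shell
# Resolve a dm-N node to its device-mapper name (e.g. a VG-LV pair).
dm_name() {
    cat "/sys/block/$1/dm/name"
}
# Usage on the affected host:
#   dm_name dm-2            # which LV is throwing the I/O errors
#   lsblk -s /dev/dm-2      # which physical disk backs it
#   smartctl -H /dev/sdX    # then check that disk's SMART health
```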
On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir <jim@palousetech.com> wrote: I see in messages on ovirt3 (my 3rd machine, the one upgraded to 4.2):
May 29 11:54:41 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) May 29 11:54:51 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) May 29 11:55:01 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) (appears a lot).
I also found, in the ssh session on that machine, some sysv warnings about the backing disk for one of the gluster volumes (straight replica 3). The glusterfs process for that disk on that machine went offline. It's my understanding that it should continue to work with the other two machines while I attempt to replace that disk, right? Attempted writes (touching an empty file) can take 15 seconds, though repeating it later will be much faster.
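[The 15-second touch is measurable: a repeatable probe makes it easier to tell whether the latency is consistent and where it lives. A minimal sketch (the mount path in the example is hypothetical; substitute your own gluster mount):]

```shell
# Time a small synced write into a given directory. On healthy storage this
# finishes in roughly a second; tens of seconds point at the layer below.
check_write_latency() {
    local dir=$1 f
    f="$dir/.write-test.$$"
    time dd if=/dev/zero of="$f" bs=1M count=16 conv=fsync 2>&1
    rm -f "$f"
}
# Example (hypothetical mount path):
#   check_write_latency /rhev/data-center/mnt/glusterSD/ovirt1:_data-hdd
```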
Gluster generates a bunch of different log files; I don't know which ones you want, or from which machine(s).
How do I do "volume profiling"?
Thanks!
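[For reference, the volume profiling requested below is built into the gluster CLI. A sketch of the usual cycle (volume name taken from this thread; run on any node):]

```shell
# Sketch: collect a latency profile for one volume under live load.
profile_volume() {
    local vol=${1:-data-hdd}
    gluster volume profile "$vol" start
    sleep 600   # let it gather samples while the workload runs
    gluster volume profile "$vol" info    # per-brick latency broken down by FOP
    gluster volume profile "$vol" stop
}
```

Each `info` call also resets the interval counters, so successive calls show per-interval numbers alongside the cumulative stats.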
On Tue, May 29, 2018 at 11:53 AM, Sahina Bose <sabose@redhat.com> wrote: Do you see errors reported in the mount logs for the volume? If so, could you attach the logs? Any issues with your underlying disks. Can you also attach output of volume profiling?
On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir <jim@palousetech.com> wrote: Ok, things have gotten MUCH worse this morning. I'm getting random errors from VMs, right now, about a third of my VMs have been paused due to storage issues, and most of the remaining VMs are not performing well.
At this point, I am in full EMERGENCY mode, as my production services are now impacted, and I'm getting calls coming in with problems...
I'd greatly appreciate help... VMs are running VERY slowly (when they run), and they are steadily getting worse. I don't know why. I was seeing CPU peaks (to 100%) on several VMs, in perfect sync, for a few minutes at a time (while the VMs became unresponsive and any Linux VMs I was logged into gave me the CPU-stuck messages from my original post). Is all this storage related?
I also have two different gluster volumes for VM storage, and only one had the issues, but now VMs in both are being affected at the same time and same way.
--Jim
On Mon, May 28, 2018 at 10:50 PM, Sahina Bose <sabose@redhat.com> wrote: [Adding gluster-users to look at the heal issue]
On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir <jim@palousetech.com> wrote: Hello:
I've been having some cluster and gluster performance issues lately. I also found that my cluster was out of date and was trying to apply updates (hoping to fix some of these), and discovered that the oVirt 4.1 repos had been taken completely offline. So I was forced to begin an upgrade to 4.2. According to the docs I found, I needed only to add the new repo, do a yum update, and reboot each host (I did the yum update on the hosts and ran engine-setup on my hosted engine). Things seemed to work relatively well, except for a gluster sync issue that showed up.
My cluster is a 3-node hyperconverged cluster. I upgraded the hosted engine first, then host 3. When host 3 came back up, for some reason one of my gluster volumes would not sync. Here's sample output:
[root@ovirt3 ~]# gluster volume heal data-hdd info
Brick 172.172.1.11:/gluster/brick3/data-hdd
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
Status: Connected
Number of entries: 8

Brick 172.172.1.12:/gluster/brick3/data-hdd
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
Status: Connected
Number of entries: 8

Brick 172.172.1.13:/gluster/brick3/data-hdd
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
Status: Connected
Number of entries: 8
--------- It's been in this state for a couple of days now, and bandwidth monitoring shows no appreciable data moving. I've repeatedly commanded a full heal from all three nodes in the cluster. It's always the same files that need healing.
When running gluster volume heal data-hdd statistics, I sometimes see different information, but always some number of "heal failed" entries. It shows 0 for split brain.
I'm not quite sure what to do. I suspect it may be due to nodes 1 and 2 still being on the older ovirt/gluster release, but I'm afraid to upgrade and reboot them until I have a good gluster sync (don't need to create a split brain issue). How do I proceed with this?
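(The commands involved, for reference; "data-hdd" is the volume from this thread:

gluster volume heal data-hdd full               # trigger a full self-heal
gluster volume heal data-hdd info               # list entries still pending heal
gluster volume heal data-hdd statistics         # per-brick attempted/failed heal counts
gluster volume heal data-hdd info split-brain   # split-brain entries (0 here, per above)

Watching "info" over time shows whether the pending-entry count is actually shrinking.)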
Second issue: I've been experiencing VERY POOR performance on most of my VMs, to the point that logging into a Windows 10 VM via remote desktop can take 5 minutes, and launching QuickBooks inside that VM can easily take 10 minutes. On some Linux VMs, I get random messages like this:
Message from syslogd@unifi at May 28 20:39:23 ... kernel:[6171996.308904] NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [mongod:14766]
(the process and PID are often different)
I'm not quite sure what to do about this either. My initial thought was to upgrade everything to current and see if it's still there, but I cannot move forward with that until my gluster is healed...
Thanks! --Jim
_______________________________________________ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-leave@ovirt.org Privacy Statement: https://www.ovirt.org/site/privacy-policy/ oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3LEV6ZQ3JV2XLA...

At the moment, it is responding as I would expect. I do know I have one failed drive in one brick (hardware failure; the OS removed the drive completely, and the underlying /dev/sdb is gone). I have a new disk on order (overnight), but that is one brick of one volume that is replica 3, so I would hope a complete failure like that would still leave the system operational. Since the gluster-volume-start problems began, I have tested the engine volume by writing and removing a file and verifying the change from all three hosts; that worked. The engine volume has all of its bricks, as do two other volumes; only one volume is short one brick. --Jim On Tue, May 29, 2018 at 11:41 PM, Johan Bernhardsson <johan@kafit.se> wrote:
Is storage working as it should? Does the gluster mount point respond as it should? Can you write files to it? Do the physical drives report that they are OK? Can you write directly to the physical drives (you shouldn't bypass the gluster mount point in normal operation, but you need to test the drives)?
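(A sketch of those checks from a host's shell; the fuse mount path and device name below are assumptions for illustration, not from this thread:

# backing filesystem health for the brick
df -h /gluster/brick3
dmesg | tail -50        # look for I/O errors on the backing disks

# time one small synchronous write through the gluster fuse mount
MNT=/rhev/data-center/mnt/glusterSD/172.172.1.11:_data-hdd   # hypothetical path
time dd if=/dev/zero of=$MNT/.write-test bs=4k count=1 oflag=dsync
rm -f $MNT/.write-test

# SMART health of a physical drive (hypothetical device)
smartctl -H /dev/sda

A sync write through the mount taking many seconds while the same write to the local disk is fast points at gluster or the network rather than the drive.)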
To me this sounds like broken or nearly broken hardware, or broken underlying filesystems.
If one of the drives malfunctions and times out, gluster will be slow and time out as well. It writes synchronously, so the slowest node slows down the whole system.
/Johan
On May 30, 2018 08:29:46 Jim Kusznir <jim@palousetech.com> wrote:
hosted-engine --deploy failed (would not come up on my existing gluster storage). However, I realized no changes were written to my existing storage. So, I went back to trying to get my old engine running.
hosted-engine --vm-status is now taking a very long time (5+ minutes) to return, and it returns stale information everywhere. I thought perhaps the lockspace was corrupt, so I tried to clean that and the metadata, but both are failing (--clean-metadata has hung and I can't even ctrl-c out of it).
How can I reinitialize all the lockspace/metadata safely? There is no engine or VMs running currently....
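(For reference, 4.2's hosted-engine tool has a built-in way to rebuild the sanlock lockspace; a sketch, assuming global maintenance is set first so the HA agents don't fight the reset:

hosted-engine --set-maintenance --mode=global
hosted-engine --reinitialize-lockspace --force
systemctl restart ovirt-ha-agent ovirt-ha-broker   # on each host
hosted-engine --set-maintenance --mode=none

This only resets the HA lockspace/metadata; it does not touch VM disks.)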
--Jim
On Tue, May 29, 2018 at 9:33 PM, Jim Kusznir <jim@palousetech.com> wrote:
Well, things went from bad to very, very bad....
It appears that during one of the 2 minute lockups, the fencing agents decided that another node in the cluster was down. As a result, 2 of the 3 nodes were simultaneously reset with fencing agent reboot. After the nodes came back up, the engine would not start. All running VMs (including VMs on the 3rd node that was not rebooted) crashed.
I've now been working for about 3 hours trying to get the engine to come up. I don't know why it won't start. hosted-engine --vm-start says it's starting, but it doesn't start (virsh doesn't show any VMs running). I'm currently running --deploy, as I had run out of other options. I hope this will allow me to re-import all my existing VMs and start them back up after everything comes back up.
I do have an unverified geo-rep backup; I don't know if it is a good backup (there were several prior messages to this list, but I didn't get replies to my questions; replication was behaving in what I believe to be a strange way, and the data directories are larger than their source).
I'll see if my --deploy works, and if not, I'll be back with another message/help request.
When the dust settles and I'm at least minimally functional again, I really want to understand why all these technologies designed to offer redundancy conspired to reduce uptime and create failures where there weren't any otherwise. I thought that with hosted engine, 3 oVirt servers, and glusterfs at a minimum of replica 2+arbiter or replica 3, I should have had strong resilience against server or disk failure, and should have been protected from (or recovered from) data corruption. Instead, all of the above happened (once I get my cluster back up, I still have to try to recover my webserver VM, which won't boot due to XFS journal corruption created during the gluster crashes). I think a lot of these issues were rooted in the upgrade from 4.1 to 4.2.
--Jim
On Tue, May 29, 2018 at 6:25 PM, Jim Kusznir <jim@palousetech.com> wrote:
I also finally found the following in my system log on one server:
[10679.524491] INFO: task glusterclogro:14933 blocked for more than 120 seconds. [10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10679.527144] glusterclogro D ffff97209832bf40 0 14933 1 0x00000080 [10679.527150] Call Trace: [10679.527161] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.527218] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.527225] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.527254] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.527260] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.527268] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10679.527271] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [10679.527275] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.527279] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10679.527283] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds. [10679.528608] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10679.529956] glusterposixfsy D ffff972495f84f10 0 14941 1 0x00000080 [10679.529961] Call Trace: [10679.529966] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.530003] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.530008] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.530038] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.530042] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.530046] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10679.530050] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [10679.530054] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.530058] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10679.530062] INFO: task glusteriotwr13:15486 blocked for more than 120 seconds. [10679.531805] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
[10679.533732] glusteriotwr13 D ffff9720a83f0000 0 15486 1 0x00000080 [10679.533738] Call Trace: [10679.533747] [<ffffffffb9913f79>] schedule+0x29/0x70 [10679.533799] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [10679.533806] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10679.533846] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [10679.533852] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [10679.533858] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10679.533863] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [10679.533868] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10679.533873] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10919.512757] INFO: task glusterclogro:14933 blocked for more than 120 seconds. [10919.514714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [10919.516663] glusterclogro D ffff97209832bf40 0 14933 1 0x00000080 [10919.516677] Call Trace: [10919.516690] [<ffffffffb9913f79>] schedule+0x29/0x70 [10919.516696] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [10919.516703] [<ffffffffb951cc04>] ? blk_finish_plug+0x14/0x40 [10919.516768] [<ffffffffc05e9224>] ? _xfs_buf_ioapply+0x334/0x460 [xfs] [10919.516774] [<ffffffffb991432d>] wait_for_completion+0xfd/0x140 [10919.516782] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [10919.516821] [<ffffffffc05eb0a3>] ? _xfs_buf_read+0x23/0x40 [xfs] [10919.516859] [<ffffffffc05eafa9>] xfs_buf_submit_wait+0xf9/0x1d0 [xfs] [10919.516902] [<ffffffffc061b279>] ? 
xfs_trans_read_buf_map+0x199/0x400 [xfs] [10919.516940] [<ffffffffc05eb0a3>] _xfs_buf_read+0x23/0x40 [xfs] [10919.516977] [<ffffffffc05eb1b9>] xfs_buf_read_map+0xf9/0x160 [xfs] [10919.517022] [<ffffffffc061b279>] xfs_trans_read_buf_map+0x199/0x400 [xfs] [10919.517057] [<ffffffffc05c8d04>] xfs_da_read_buf+0xd4/0x100 [xfs] [10919.517091] [<ffffffffc05c8d53>] xfs_da3_node_read+0x23/0xd0 [xfs] [10919.517126] [<ffffffffc05c9fee>] xfs_da3_node_lookup_int+0x6e/0x2f0 [xfs] [10919.517160] [<ffffffffc05d5a1d>] xfs_dir2_node_lookup+0x4d/0x170 [xfs] [10919.517194] [<ffffffffc05ccf5d>] xfs_dir_lookup+0x1bd/0x1e0 [xfs] [10919.517233] [<ffffffffc05fd8d9>] xfs_lookup+0x69/0x140 [xfs] [10919.517271] [<ffffffffc05fa018>] xfs_vn_lookup+0x78/0xc0 [xfs] [10919.517278] [<ffffffffb9425cf3>] lookup_real+0x23/0x60 [10919.517283] [<ffffffffb9426702>] __lookup_hash+0x42/0x60 [10919.517288] [<ffffffffb942d519>] SYSC_renameat2+0x3a9/0x5a0 [10919.517296] [<ffffffffb94d3753>] ? selinux_file_free_security+0x2 3/0x30 [10919.517304] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10919.517309] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10919.517313] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [10919.517318] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [10919.517323] [<ffffffffb942e58e>] SyS_renameat2+0xe/0x10 [10919.517328] [<ffffffffb942e5ce>] SyS_rename+0x1e/0x20 [10919.517333] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [10919.517339] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11159.496095] INFO: task glusteriotwr9:15482 blocked for more than 120 seconds. [11159.497546] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11159.498978] glusteriotwr9 D ffff971fa0fa1fa0 0 15482 1 0x00000080 [11159.498984] Call Trace: [11159.498995] [<ffffffffb9911f00>] ? 
bit_wait+0x50/0x50 [11159.498999] [<ffffffffb9913f79>] schedule+0x29/0x70 [11159.499003] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11159.499056] [<ffffffffc05dd9b7>] ? xfs_iext_bno_to_ext+0xa7/0x1a0 [xfs] [11159.499082] [<ffffffffc05dd43e>] ? xfs_iext_bno_to_irec+0x8e/0xd0 [xfs] [11159.499090] [<ffffffffb92f7a12>] ? ktime_get_ts64+0x52/0xf0 [11159.499093] [<ffffffffb9911f00>] ? bit_wait+0x50/0x50 [11159.499097] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11159.499101] [<ffffffffb9913528>] io_schedule+0x18/0x20 [11159.499104] [<ffffffffb9911f11>] bit_wait_io+0x11/0x50 [11159.499107] [<ffffffffb9911ac1>] __wait_on_bit_lock+0x61/0xc0 [11159.499113] [<ffffffffb9393634>] __lock_page+0x74/0x90 [11159.499118] [<ffffffffb92bc210>] ? wake_bit_function+0x40/0x40 [11159.499121] [<ffffffffb9394154>] __find_lock_page+0x54/0x70 [11159.499125] [<ffffffffb9394e85>] grab_cache_page_write_begin+0x 55/0xc0 [11159.499130] [<ffffffffb9484b76>] iomap_write_begin+0x66/0x100 [11159.499135] [<ffffffffb9484edf>] iomap_write_actor+0xcf/0x1d0 [11159.499140] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 [11159.499144] [<ffffffffb94854e7>] iomap_apply+0xb7/0x150 [11159.499149] [<ffffffffb9485621>] iomap_file_buffered_write+0xa1 /0xe0 [11159.499153] [<ffffffffb9484e10>] ? iomap_write_end+0x80/0x80 [11159.499182] [<ffffffffc05f025d>] xfs_file_buffered_aio_write+0x12d/0x2c0 [xfs] [11159.499213] [<ffffffffc05f057d>] xfs_file_aio_write+0x18d/0x1b0 [xfs] [11159.499217] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 [11159.499222] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 [11159.499225] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 [11159.499230] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11159.499234] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11159.499238] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11279.488720] INFO: task xfsaild/dm-10:1134 blocked for more than 120 seconds. 
[11279.490197] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.491665] xfsaild/dm-10 D ffff9720a8660fd0 0 1134 2 0x00000000 [11279.491671] Call Trace: [11279.491682] [<ffffffffb92a3a2e>] ? try_to_del_timer_sync+0x5e/0x90 [11279.491688] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.491744] [<ffffffffc060de36>] _xfs_log_force+0x1c6/0x2c0 [xfs] [11279.491750] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.491783] [<ffffffffc0619fec>] ? xfsaild+0x16c/0x6f0 [xfs] [11279.491817] [<ffffffffc060df5c>] xfs_log_force+0x2c/0x70 [xfs] [11279.491849] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs] [11279.491880] [<ffffffffc0619fec>] xfsaild+0x16c/0x6f0 [xfs] [11279.491913] [<ffffffffc0619e80>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs] [11279.491919] [<ffffffffb92bb161>] kthread+0xd1/0xe0 [11279.491926] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 [11279.491932] [<ffffffffb9920677>] ret_from_fork_nospec_begin+0x2 1/0x21 [11279.491936] [<ffffffffb92bb090>] ? insert_kthread_work+0x40/0x40 [11279.491976] INFO: task glusterclogfsyn:14934 blocked for more than 120 seconds. [11279.493466] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.494952] glusterclogfsyn D ffff97209832af70 0 14934 1 0x00000080 [11279.494957] Call Trace: [11279.494979] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod] [11279.494983] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.494987] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11279.494997] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod] [11279.495001] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11279.495005] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140 [11279.495010] [<ffffffffb92cf1b0>] ? 
wake_up_state+0x20/0x20 [11279.495016] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 [11279.495049] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs] [11279.495079] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] [11279.495086] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.495090] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11279.495094] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [11279.495098] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.495102] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11279.495105] INFO: task glusterposixfsy:14941 blocked for more than 120 seconds. [11279.496606] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.498114] glusterposixfsy D ffff972495f84f10 0 14941 1 0x00000080 [11279.498118] Call Trace: [11279.498134] [<ffffffffc0309839>] ? __split_and_process_bio+0x2e9/0x520 [dm_mod] [11279.498138] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.498142] [<ffffffffb99118e9>] schedule_timeout+0x239/0x2c0 [11279.498152] [<ffffffffc0309d98>] ? dm_make_request+0x128/0x1a0 [dm_mod] [11279.498156] [<ffffffffb991348d>] io_schedule_timeout+0xad/0x130 [11279.498160] [<ffffffffb99145ad>] wait_for_completion_io+0xfd/0x140 [11279.498165] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.498169] [<ffffffffb951e574>] blkdev_issue_flush+0xb4/0x110 [11279.498202] [<ffffffffc06064b9>] xfs_blkdev_issue_flush+0x19/0x20 [xfs] [11279.498231] [<ffffffffc05eec40>] xfs_file_fsync+0x1b0/0x1e0 [xfs] [11279.498238] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.498242] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11279.498246] [<ffffffffb944f3f3>] SyS_fdatasync+0x13/0x20 [11279.498250] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.498254] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11279.498257] INFO: task glusteriotwr1:14950 blocked for more than 120 seconds. 
[11279.499789] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.501343] glusteriotwr1 D ffff97208b6daf70 0 14950 1 0x00000080 [11279.501348] Call Trace: [11279.501353] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.501390] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [11279.501396] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.501428] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [11279.501432] [<ffffffffb944ef3f>] generic_write_sync+0x4f/0x70 [11279.501461] [<ffffffffc05f0545>] xfs_file_aio_write+0x155/0x1b0 [xfs] [11279.501466] [<ffffffffb941a533>] do_sync_write+0x93/0xe0 [11279.501471] [<ffffffffb941b010>] vfs_write+0xc0/0x1f0 [11279.501475] [<ffffffffb941c002>] SyS_pwrite64+0x92/0xc0 [11279.501479] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11279.501483] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.501489] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [11279.501493] INFO: task glusteriotwr4:14953 blocked for more than 120 seconds. [11279.503047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [11279.504630] glusteriotwr4 D ffff972499f2bf40 0 14953 1 0x00000080 [11279.504635] Call Trace: [11279.504640] [<ffffffffb9913f79>] schedule+0x29/0x70 [11279.504676] [<ffffffffc060e388>] _xfs_log_force_lsn+0x2e8/0x340 [xfs] [11279.504681] [<ffffffffb92cf1b0>] ? wake_up_state+0x20/0x20 [11279.504710] [<ffffffffc05eeb97>] xfs_file_fsync+0x107/0x1e0 [xfs] [11279.504714] [<ffffffffb944f0e7>] do_fsync+0x67/0xb0 [11279.504718] [<ffffffffb992076f>] ? system_call_after_swapgs+0xbc/ 0x160 [11279.504722] [<ffffffffb944f3d0>] SyS_fsync+0x10/0x20 [11279.504725] [<ffffffffb992082f>] system_call_fastpath+0x1c/0x21 [11279.504730] [<ffffffffb992077b>] ? system_call_after_swapgs+0xc8/ 0x160 [12127.466494] perf: interrupt took too long (8263 > 8150), lowering kernel.perf_event_max_sample_rate to 24000
-------------------- I think this is the cause of the massive oVirt performance issues irrespective of gluster volume. At the time this happened, I was also ssh'ed into the host and was running some rpm query commands. I had just run rpm -qa | grep glusterfs (to verify what version was actually installed), and that command took almost 2 minutes to return! Normally it takes less than 2 seconds. That is all purely local SSD I/O, too....
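(A quick way to sanity-check raw synchronous-write latency on the local disk, independent of gluster; the target path is an assumption, so point it at a filesystem on the disk you want to test:

```shell
# 256 x 4 KiB synchronous writes (oflag=dsync forces each write to disk).
# On a healthy local SSD this finishes in well under a second; multi-second
# times suggest the hung-task stalls above are hitting local storage too.
dd if=/dev/zero of=/var/tmp/latency-test bs=4k count=256 oflag=dsync
rm -f /var/tmp/latency-test
```

Comparing this against the same test run on the gluster fuse mount separates a sick disk from a sick gluster.)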
I'm no expert, but it's my understanding that any time software causes these kinds of issues, it's a serious bug in that software, even if it's only mishandled exceptions. Is this correct?
--Jim
On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir <jim@palousetech.com> wrote:
I think this is the profile information for one of the volumes that lives on the SSDs and is fully operational with no down/problem disks:
[root@ovirt2 yum.repos.d]# gluster volume profile data info Brick: ovirt2.nwfiber.com:/gluster/brick2/data ---------------------------------------------- Cumulative Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 983 2696 1059 No. of Writes: 0 1113 302
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 852 88608 53526 No. of Writes: 522 812340 76257
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 54351 241901 15024 No. of Writes: 21636 8656 8976
Block Size: 131072b+ No. of Reads: 524156 No. of Writes: 296071 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 4189 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1257 RELEASEDIR 0.00 46.19 us 12.00 us 187.00 us 69 FLUSH 0.00 147.00 us 78.00 us 367.00 us 86 REMOVEXATTR 0.00 223.46 us 24.00 us 1166.00 us 149 READDIR 0.00 565.34 us 76.00 us 3639.00 us 88 FTRUNCATE 0.00 263.28 us 20.00 us 28385.00 us 228 LK 0.00 98.84 us 2.00 us 880.00 us 1198 OPENDIR 0.00 91.59 us 26.00 us 10371.00 us 3853 STATFS 0.00 494.14 us 17.00 us 193439.00 us 1171 GETXATTR 0.00 299.42 us 35.00 us 9799.00 us 2044 READDIRP 0.00 1965.31 us 110.00 us 382258.00 us 321 XATTROP 0.01 113.40 us 24.00 us 61061.00 us 8134 STAT 0.01 755.38 us 57.00 us 607603.00 us 3196 DISCARD 0.05 2690.09 us 58.00 us 2704761.00 us 3206 OPEN 0.10 119978.25 us 97.00 us 9406684.00 us 154 SETATTR 0.18 101.73 us 28.00 us 700477.00 us 313379 FSTAT 0.23 1059.84 us 25.00 us 2716124.00 us 38255 LOOKUP 0.47 1024.11 us 54.00 us 6197164.00 us 81455 FXATTROP 1.72 2984.00 us 15.00 us 37098954.00 us 103020 FINODELK 5.92 44315.32 us 51.00 us 24731536.00 us 23957 FSYNC 13.27 2399.78 us 25.00 us 22089540.00 us 991005 READ 37.00 5980.43 us 52.00 us 22099889.00 us 1108976 WRITE 41.04 5452.75 us 13.00 us 22102452.00 us 1349053 INODELK
Duration: 10026 seconds Data Read: 80046027759 bytes Data Written: 44496632320 bytes
Interval 1 Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 983 2696 1059 No. of Writes: 0 838 185
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 852 85856 51575 No. of Writes: 382 705802 57812
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 52673 232093 14984 No. of Writes: 13499 4908 4242
Block Size: 131072b+ No. of Reads: 460040 No. of Writes: 6411 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2093 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1093 RELEASEDIR 0.00 53.38 us 26.00 us 111.00 us 16 FLUSH 0.00 145.14 us 78.00 us 367.00 us 71 REMOVEXATTR 0.00 190.96 us 114.00 us 298.00 us 71 SETATTR 0.00 213.38 us 24.00 us 1145.00 us 90 READDIR 0.00 263.28 us 20.00 us 28385.00 us 228 LK 0.00 101.76 us 2.00 us 880.00 us 1093 OPENDIR 0.01 93.60 us 27.00 us 10371.00 us 3090 STATFS 0.02 537.47 us 17.00 us 193439.00 us 1038 GETXATTR 0.03 297.44 us 35.00 us 9799.00 us 1990 READDIRP 0.03 2357.28 us 110.00 us 382258.00 us 253 XATTROP 0.04 385.93 us 58.00 us 47593.00 us 2091 OPEN 0.04 114.86 us 24.00 us 61061.00 us 7715 STAT 0.06 444.59 us 57.00 us 333240.00 us 3053 DISCARD 0.42 316.24 us 25.00 us 290728.00 us 29823 LOOKUP 0.73 257.92 us 54.00 us 344812.00 us 63296 FXATTROP 1.37 98.30 us 28.00 us 67621.00 us 313172 FSTAT 1.58 2124.69 us 51.00 us 849200.00 us 16717 FSYNC 5.73 162.46 us 52.00 us 748492.00 us 794079 WRITE 7.19 2065.17 us 16.00 us 37098954.00 us 78381 FINODELK 36.44 886.32 us 25.00 us 2216436.00 us 925421 READ 46.30 1178.04 us 13.00 us 1700704.00 us 884635 INODELK
Duration: 7485 seconds Data Read: 71250527215 bytes Data Written: 5119903744 bytes
Brick: ovirt3.nwfiber.com:/gluster/brick2/data ---------------------------------------------- Cumulative Stats: Block Size: 1b+ No. of Reads: 0 No. of Writes: 3264419 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 90 FORGET 0.00 0.00 us 0.00 us 0.00 us 9462 RELEASE 0.00 0.00 us 0.00 us 0.00 us 4254 RELEASEDIR 0.00 50.52 us 13.00 us 190.00 us 71 FLUSH 0.00 186.97 us 87.00 us 713.00 us 86 REMOVEXATTR 0.00 79.32 us 33.00 us 189.00 us 228 LK 0.00 220.98 us 129.00 us 513.00 us 86 SETATTR 0.01 259.30 us 26.00 us 2632.00 us 137 READDIR 0.02 322.76 us 145.00 us 2125.00 us 321 XATTROP 0.03 109.55 us 2.00 us 1258.00 us 1193 OPENDIR 0.05 70.21 us 21.00 us 431.00 us 3196 DISCARD 0.05 169.26 us 21.00 us 2315.00 us 1545 GETXATTR 0.12 176.85 us 63.00 us 2844.00 us 3206 OPEN 0.61 303.49 us 90.00 us 3085.00 us 9633 FSTAT 2.44 305.66 us 28.00 us 3716.00 us 38230 LOOKUP 4.52 266.22 us 55.00 us 53424.00 us 81455 FXATTROP 6.96 1397.99 us 51.00 us 64822.00 us 23889 FSYNC 16.48 84.74 us 25.00 us 6917.00 us 932592 WRITE 30.16 106.90 us 13.00 us 3920189.00 us 1353046 INODELK 38.55 1794.52 us 14.00 us 16210553.00 us 103039 FINODELK
Duration: 66562 seconds Data Read: 0 bytes Data Written: 3264419 bytes
Interval 1 Stats: Block Size: 1b+ No. of Reads: 0 No. of Writes: 794080 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2093 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1093 RELEASEDIR 0.00 70.31 us 26.00 us 125.00 us 16 FLUSH 0.00 193.10 us 103.00 us 713.00 us 71 REMOVEXATTR 0.01 227.32 us 133.00 us 513.00 us 71 SETATTR 0.01 79.32 us 33.00 us 189.00 us 228 LK 0.01 259.83 us 35.00 us 1138.00 us 89 READDIR 0.03 318.26 us 145.00 us 2047.00 us 253 XATTROP 0.04 112.67 us 3.00 us 1258.00 us 1093 OPENDIR 0.06 167.98 us 23.00 us 1951.00 us 1014 GETXATTR 0.08 70.97 us 22.00 us 431.00 us 3053 DISCARD 0.13 183.78 us 66.00 us 2844.00 us 2091 OPEN 1.01 303.82 us 90.00 us 3085.00 us 9610 FSTAT 3.27 316.59 us 30.00 us 3716.00 us 29820 LOOKUP 5.83 265.79 us 59.00 us 53424.00 us 63296 FXATTROP 7.95 1373.89 us 51.00 us 64822.00 us 16717 FSYNC 23.17 851.99 us 14.00 us 16210553.00 us 78555 FINODELK 24.04 87.44 us 27.00 us 6917.00 us 794081 WRITE 34.36 111.91 us 14.00 us 984871.00 us 886790 INODELK
Duration: 7485 seconds Data Read: 0 bytes Data Written: 794080 bytes
----------------------- Here is the data from the volume that is backed by the HDDs and has one failed disk: [root@ovirt2 yum.repos.d]# gluster volume profile data-hdd info Brick: 172.172.1.12:/gluster/brick3/data-hdd -------------------------------------------- Cumulative Stats: Block Size: 256b+ 512b+ 1024b+ No. of Reads: 1702 86 16 No. of Writes: 0 767 71
Block Size: 2048b+ 4096b+ 8192b+ No. of Reads: 19 51841 2049 No. of Writes: 76 60668 35727
Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 1744 639 1088 No. of Writes: 8524 2410 1285
Block Size: 131072b+ No. of Reads: 771999 No. of Writes: 29584 %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop --------- ----------- ----------- ----------- ------------ ---- 0.00 0.00 us 0.00 us 0.00 us 2902 RELEASE 0.00 0.00 us 0.00 us 0.00 us 1517 RELEASEDIR 0.00 197.00 us 197.00 us 197.00 us 1 FTRUNCATE 0.00 70.24 us 16.00 us 758.00 us 51 FLUSH 0.00 143.93 us 82.00 us 305.00 us 57 REMOVEXATTR 0.00 178.63 us 105.00 us 712.00 us 60 SETATTR 0.00 67.30 us 19.00 us 572.00 us 555 LK 0.00 322.80 us 23.00 us 4673.00 us 138 READDIR 0.00 336.56 us 106.00 us 11994.00 us 237 XATTROP 0.00 84.70 us 28.00 us 1071.00 us 3469 STATFS 0.01 387.75 us 2.00 us 146017.00 us 1467 OPENDIR 0.01 148.59 us 21.00 us 64374.00 us 4454 STAT 0.02 783.02 us 16.00 us 93502.00 us 1902 GETXATTR 0.03 1516.10 us 17.00 us 210690.00 us 1364 ENTRYLK 0.03 2555.47 us 300.00 us 674454.00 us 1064 READDIRP 0.07 85.74 us 19.00 us 68340.00 us 62849 FSTAT 0.07 1978.12 us 59.00 us 202596.00 us 2729 OPEN 0.22 708.57 us 15.00 us 394799.00 us 25447 LOOKUP 5.94 2331.74 us 15.00 us 1099530.00 us 207534 FINODELK 7.31 8311.75 us 58.00 us 1800216.00 us 71668 FXATTROP 12.49 7735.19 us 51.00 us 3595513.00 us 131642 WRITE 17.70 957.08 us 16.00 us 13700466.00 us 1508160 INODELK 24.55 2546.43 us 26.00 us 5077347.00 us 786060 READ 31.56 49699.15 us 47.00 us 3746331.00 us 51777 FSYNC
    Duration: 10101 seconds
   Data Read: 101562897361 bytes
Data Written: 4834450432 bytes
Interval 0 Stats:
   Block Size:               256b+                 512b+                1024b+
 No. of Reads:                1702                    86                    16
No. of Writes:                   0                   767                    71
   Block Size:              2048b+                4096b+                8192b+
 No. of Reads:                  19                 51841                  2049
No. of Writes:                  76                 60668                 35727
   Block Size:             16384b+               32768b+               65536b+
 No. of Reads:                1744                   639                  1088
No. of Writes:                8524                  2410                  1285
   Block Size:            131072b+
 No. of Reads:              771999
No. of Writes:               29584
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us           2902     RELEASE
      0.00       0.00 us       0.00 us       0.00 us           1517  RELEASEDIR
      0.00     197.00 us     197.00 us     197.00 us              1   FTRUNCATE
      0.00      70.24 us      16.00 us     758.00 us             51       FLUSH
      0.00     143.93 us      82.00 us     305.00 us             57 REMOVEXATTR
      0.00     178.63 us     105.00 us     712.00 us             60     SETATTR
      0.00      67.30 us      19.00 us     572.00 us            555          LK
      0.00     322.80 us      23.00 us    4673.00 us            138     READDIR
      0.00     336.56 us     106.00 us   11994.00 us            237     XATTROP
      0.00      84.70 us      28.00 us    1071.00 us           3469      STATFS
      0.01     387.75 us       2.00 us  146017.00 us           1467     OPENDIR
      0.01     148.59 us      21.00 us   64374.00 us           4454        STAT
      0.02     783.02 us      16.00 us   93502.00 us           1902    GETXATTR
      0.03    1516.10 us      17.00 us  210690.00 us           1364     ENTRYLK
      0.03    2555.47 us     300.00 us  674454.00 us           1064    READDIRP
      0.07      85.73 us      19.00 us   68340.00 us          62849       FSTAT
      0.07    1978.12 us      59.00 us  202596.00 us           2729        OPEN
      0.22     708.57 us      15.00 us  394799.00 us          25447      LOOKUP
      5.94    2334.57 us      15.00 us 1099530.00 us         207534    FINODELK
      7.31    8311.49 us      58.00 us 1800216.00 us          71668    FXATTROP
     12.49    7735.32 us      51.00 us 3595513.00 us         131642       WRITE
     17.71     957.08 us      16.00 us 13700466.00 us       1508160     INODELK
     24.56    2546.42 us      26.00 us 5077347.00 us         786060        READ
     31.54   49651.63 us      47.00 us 3746331.00 us          51777       FSYNC
    Duration: 10101 seconds
   Data Read: 101562897361 bytes
Data Written: 4834450432 bytes
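(For reference, a quick way to digest profile dumps like the ones above is to sort the fop rows by %-latency and pull out the worst offender. This is a hedged sketch, not from the original thread: the real input would be piped from "gluster volume profile data-hdd info"; here the sample is a trimmed copy of the data-hdd rows above.)

```shell
# Trimmed sample of the per-fop rows from the data-hdd profile above.
# Columns: %-latency, avg, "us", min, "us", max, "us", no. of calls, fop.
sample='      5.94    2331.74 us      15.00 us  1099530.00 us      207534    FINODELK
      7.31    8311.75 us      58.00 us  1800216.00 us       71668    FXATTROP
     12.49    7735.19 us      51.00 us  3595513.00 us      131642       WRITE
     17.70     957.08 us      16.00 us 13700466.00 us     1508160     INODELK
     24.55    2546.43 us      26.00 us  5077347.00 us      786060        READ
     31.56   49699.15 us      47.00 us  3746331.00 us       51777       FSYNC'

# Sort numerically by the leading %-latency column and take the top fop name.
top_fop=$(printf '%s\n' "$sample" | sort -rn | awk 'NR==1 {print $NF}')
# Extract the average latency recorded for FSYNC.
fsync_avg=$(printf '%s\n' "$sample" | awk '$NF=="FSYNC" {print $2}')
echo "busiest fop: $top_fop (avg ${fsync_avg} us)"
```

On the numbers above this flags FSYNC at ~49.7 ms average on the SSHD brick, versus ~1.4 ms on the SSD volume's interval stats, which is consistent with the failing spinning disk dragging down writes.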
On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir <jim@palousetech.com> wrote:
Thank you for your response.
I have 4 gluster volumes. 3 are replica 2 + arbiter: the replica bricks are on ovirt1 and ovirt2, with the arbiter on ovirt3. The 4th volume is replica 3, with a brick on all three ovirt machines.
The first 3 volumes are on an SSD; the 4th is on a Seagate SSHD (the same model in all three machines). On ovirt3, the SSHD has reported hard IO failures, and that brick is offline. However, the other two replicas are fully operational (although they still show entries in the heal info output that won't go away, but that may be expected until I replace the failed disk).
What is bothering me is that ALL 4 gluster volumes are showing horrible performance issues. At this point, as the bad disk has been completely offlined, I would expect gluster to perform at normal speed, but that is definitely not the case.
I've also noticed that the performance hits seem to come in waves: things work acceptably (but slowly) for a while, then suddenly it's as if all disk IO on all volumes (including non-gluster local OS disk volumes for the hosts) pauses for about 30 seconds, then IO resumes. During those times, I start getting "VM not responding" and "host not responding" notices, and the applications have major issues.
I've shut down most of my VMs and am down to just my essential core VMs (I shed about 75% of my VMs). I am still experiencing the same issues.
Am I correct in believing that, once the failed disk was brought offline, performance should have returned to normal?
On Tue, May 29, 2018 at 1:27 PM, Alex K <rightkicktech@gmail.com> wrote:
> I would check disks status and accessibility of mount points where
> your gluster volumes reside.
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/JKBSLSZFRMFGILXGZB4YAMJIJKEKNGMA/

Due to the cluster spiraling downward and increasing customer complaints, I went ahead and finished the upgrade of the nodes to ovirt 4.2 and gluster 3.12. It didn't seem to help at all. I DO have one brick down on ONE of my 4 gluster filesystems/exports/whatever; the other 3 are fully available. However, I still see heavy IO wait, including on the perfectly healthy filesystems. It's bad enough that I get ovirt e-mails warning of hosts going down and coming back up, and VMs on the good gluster filesystem are reporting IO waits of greater than 60% in top!

I have applications that are crashing due to the IO wait issues. I do think I got glusterfs profiling running, but I don't know how to get a useful report out (it's in the ovirt GUI). I did see read and write operations showing about 30 seconds; I would have expected that to be MUCH better.

(As I write this, my core VoIP server is now showing 99.1% IO wait load... and that is customer calls failing/dropping....)

PLEASE... how do I FIX this?

--Jim

On Tue, May 29, 2018 at 12:14 PM, Jim Kusznir <jim@palousetech.com> wrote:
On one ovirt server, I'm now seeing these messages:

[56474.239725] blk_update_request: 63 callbacks suppressed
[56474.239732] blk_update_request: I/O error, dev dm-2, sector 0
[56474.240602] blk_update_request: I/O error, dev dm-2, sector 3905945472
[56474.241346] blk_update_request: I/O error, dev dm-2, sector 3905945584
[56474.242236] blk_update_request: I/O error, dev dm-2, sector 2048
[56474.243072] blk_update_request: I/O error, dev dm-2, sector 3905943424
[56474.243997] blk_update_request: I/O error, dev dm-2, sector 3905943536
[56474.247347] blk_update_request: I/O error, dev dm-2, sector 0
[56474.248315] blk_update_request: I/O error, dev dm-2, sector 3905945472
[56474.249231] blk_update_request: I/O error, dev dm-2, sector 3905945584
[56474.250221] blk_update_request: I/O error, dev dm-2, sector 2048
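(A hedged sketch, not part of the original mail: those kernel errors name "dm-2", a device-mapper node, so the first step is usually mapping dm-2 back to its LVM name and the physical disk beneath it. lsblk and dmsetup are standard util-linux/device-mapper tools, but they only produce real output on a Linux host that actually has a dm-2 device.)

```shell
# Map the dm-2 from the kernel errors back to a device-mapper name and disk.
if [ -d /sys/block/dm-2 ] && command -v dmsetup >/dev/null 2>&1; then
    cat /sys/block/dm-2/dm/name    # LVM / device-mapper name behind dm-2
    lsblk -s /dev/dm-2             # inverse tree: walk down to the physical disk
    dmsetup info /dev/dm-2         # mapper state and open count
    found=yes
else
    found=no                       # no dm-2 here (or not a Linux host)
fi
```

If the disk at the bottom of that tree is the failing SSHD, the I/O errors on dm-2 and the gluster brick failure would have a single root cause.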
On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir <jim@palousetech.com> wrote:
I see in messages on ovirt3 (my 3rd machine, the one upgraded to 4.2):
May 29 11:54:41 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
May 29 11:54:51 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
May 29 11:55:01 ovirt3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)

(appears a lot)
I also found, in the ssh session on that host, some sysv warnings about the backing disk for one of the gluster volumes (the straight replica 3 one). The glusterfs process for that disk on that machine went offline. It's my understanding that the volume should continue to work with the other two machines while I attempt to replace that disk, right? Attempted writes (touching an empty file) can take 15 seconds, though repeating the write later is much faster.
Gluster generates a bunch of different log files; I don't know which ones you want, or from which machine(s).
How do I do "volume profiling"?
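(Answering the question above for reference: profiling is driven from the gluster CLI. A minimal sketch, assuming the standard GlusterFS CLI is installed; "data-hdd" is the volume name from this thread, and the commands must run on a gluster host.)

```shell
# Per-brick fop profiling for one volume, start to finish.
vol=data-hdd
if command -v gluster >/dev/null 2>&1; then
    gluster volume profile "$vol" start   # begin collecting per-brick fop stats
    # ...let normal I/O run for a few minutes before reading...
    gluster volume profile "$vol" info    # cumulative + interval stats per brick
    gluster volume profile "$vol" stop    # stop collecting when finished
    ran=yes
else
    ran=no                                # gluster CLI not present on this machine
fi
```

The "info" output is the same kind of table pasted later in this thread (block-size histogram plus per-fop latency rows).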
Thanks!
On Tue, May 29, 2018 at 11:53 AM, Sahina Bose <sabose@redhat.com> wrote:
Do you see errors reported in the mount logs for the volume? If so, could you attach the logs? Any issues with your underlying disks? Can you also attach the output of volume profiling?
On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir <jim@palousetech.com> wrote:
Ok, things have gotten MUCH worse this morning. I'm getting random errors from VMs; right now about a third of my VMs have been paused due to storage issues, and most of the remaining VMs are not performing well.
At this point, I am in full EMERGENCY mode, as my production services are now impacted, and I'm getting calls coming in with problems...
I'd greatly appreciate help... VMs are running VERY slowly (when they run), and they are steadily getting worse. I don't know why. I was seeing CPU peaks (to 100%) on several VMs, in perfect sync, for a few minutes at a time (while the VM became unresponsive, and any Linux VMs I was logged into were giving me the CPU stuck messages from my original post). Is all this storage related?
I also have two different gluster volumes for VM storage, and only one had the issues, but now VMs on both are being affected at the same time and in the same way.
--Jim
On Mon, May 28, 2018 at 10:50 PM, Sahina Bose <sabose@redhat.com> wrote:
[Adding gluster-users to look at the heal issue]
On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir <jim@palousetech.com> wrote:
Hello:
I've been having some cluster and gluster performance issues lately. I also found that my cluster was out of date, and was trying to apply updates (hoping to fix some of these), and discovered the ovirt 4.1 repos were taken completely offline. So, I was forced to begin an upgrade to 4.2. According to the docs I found/read, I needed only to add the new repo, do a yum update, and reboot to be good on my hosts (I did the yum update, and engine-setup on my hosted engine). Things seemed to work relatively well, except for a gluster sync issue that showed up.
My cluster is a 3 node hyperconverged cluster. I upgraded the hosted engine first, then engine 3. When engine 3 came back up, for some reason one of my gluster volumes would not sync. Here's sample output:
[root@ovirt3 ~]# gluster volume heal data-hdd info
Brick 172.172.1.11:/gluster/brick3/data-hdd
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
Status: Connected
Number of entries: 8
Brick 172.172.1.12:/gluster/brick3/data-hdd
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
Status: Connected
Number of entries: 8
Brick 172.172.1.13:/gluster/brick3/data-hdd
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/b1ea3f62-0f05-4ded-8c82-9c91c90e0b61/d5d6bf5a-499f-431d-9013-5453db93ed32
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/8c8b5147-e9d6-4810-b45b-185e3ed65727/16f08231-93b0-489d-a2fd-687b6bf88eaa
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/12924435-b9c2-4aab-ba19-1c1bc31310ef/07b3db69-440e-491e-854c-bbfa18a7cff2
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/f3c8e7aa-6ef2-42a7-93d4-e0a4df6dd2fa/2eb0b1ad-2606-44ef-9cd3-ae59610a504b
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-f153-4cdc-85bd-ba72544c2631/b453a300-0602-4be1-8310-8bd5abe00971
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/6da854d1-b6be-446b-9bf0-90a0dbbea830/3c93bd1f-b7fa-4aa2-b445-6904e31839ba
/cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/7f647567-d18c-44f1-a58e-9b8865833acb/f9364470-9770-4bb1-a6b9-a54861849625
Status: Connected
Number of entries: 8
---------
It's been in this state for a couple of days now, and bandwidth monitoring shows no appreciable data moving. I've tried repeatedly commanding a full heal from all three nodes in the cluster. It's always the same files that need healing.
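(A hedged sketch, not part of the original mail: when waiting on a heal like this, one useful check is whether the total pending-entry count is actually shrinking over time. The real command is "gluster volume heal data-hdd info", run periodically; here we parse a trimmed sample of the output shown above.)

```shell
# Trimmed sample of "gluster volume heal data-hdd info" output (paths omitted).
sample='Brick 172.172.1.11:/gluster/brick3/data-hdd
Status: Connected
Number of entries: 8
Brick 172.172.1.12:/gluster/brick3/data-hdd
Status: Connected
Number of entries: 8
Brick 172.172.1.13:/gluster/brick3/data-hdd
Number of entries: 8'

# Sum the per-brick entry counts; a stuck heal shows the same total run after run.
total=$(printf '%s\n' "$sample" | awk '/^Number of entries:/ {s += $4} END {print s}')
echo "pending heal entries across bricks: $total"
```

"gluster volume heal data-hdd info split-brain" can be watched the same way to confirm the split-brain count really stays at 0.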
When running gluster volume heal data-hdd statistics, I sometimes see different information, but always some number of "heal failed" entries. It shows 0 for split brain.
I'm not quite sure what to do. I suspect it may be due to nodes 1 and 2 still being on the older ovirt/gluster release, but I'm afraid to upgrade and reboot them until I have a good gluster sync (I don't need to create a split brain issue). How do I proceed with this?
Second issue: I've been experiencing VERY POOR performance on most of my VMs. To the point that logging into a Windows 10 VM via remote desktop can take 5 minutes, and launching QuickBooks inside said VM can easily take 10 minutes. On some Linux VMs, I get random messages like this:

Message from syslogd@unifi at May 28 20:39:23 ...
kernel:[6171996.308904] NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [mongod:14766]
(the process and PID are often different)
I'm not quite sure what to do about this either. My initial thought was to upgrade everything to current and see if it's still there, but I cannot move forward with that until my gluster is healed...
Thanks! --Jim
participants (7):
- Alex K
- Jayme
- Jim Kusznir
- Johan Bernhardsson
- Krutika Dhananjay
- Ravishankar N
- Sahina Bose