Hosted-Engine Down, VDSM Cert Expired
by jarredm@ecboces.org
Hey all,
I'm looking to get a bit of guidance here. As the subject suggests, we have a hosted-engine oVirt cluster. I ran into an issue trying to log in to the web interface: I was seeing errors about certificate expiration, although I didn't know which cert it was referring to at the time. I ssh'd to the hosted-engine and restarted it. However, once it shut down, it was unable to start again.
What I've discovered so far is that the hosted-engine currently resides on node 33 (storage is on a Gluster volume) and the vdsm certificate for that node has expired. There are three nodes in total; two of them have expired certs, but one still has a valid cert. I'm able to run vdsm-client commands on that node, although I haven't done anything with it yet other than verifying that some of the Host get* commands succeed. I'm wondering if it is possible to "pull" the hosted-engine onto this host and fire it back up there.
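For reference, a quick way to check which of the VDSM certificates have actually expired on each node (a minimal sketch, assuming the standard oVirt 4.4 certificate paths and that python3-cryptography is installed):

from datetime import datetime, timezone
from cryptography import x509

# Standard vdsm certificate locations on an oVirt 4.4 host (assumption).
paths = ["/etc/pki/vdsm/certs/vdsmcert.pem",
         "/etc/pki/vdsm/libvirt-spice/server-cert.pem"]
now = datetime.now(timezone.utc)
for p in paths:
    with open(p, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    expires = cert.not_valid_after.replace(tzinfo=timezone.utc)
    print(p, "expires", expires, "(EXPIRED)" if expires < now else "(ok)")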
Thanks in advance for your help!
I'm gathering log info etc as described and it will be available here: https://drive.google.com/drive/folders/1cBPrN8SuIR-dgnpRKe1eKXRZZTPPshyJ?...
Version info:
Installed Packages
centos-release-gluster8.noarch 1.0-1.el8 @extras
centos-release-storage-common.noarch 2-2.el8 @extras
glusterfs.x86_64 8.6-2.el8 @centos-gluster8
ovirt-release44.noarch 4.4.8.3-1.el8 @@commandline
vdsm.x86_64 4.40.80.6-1.el8 @ovirt-4.4
Re: Cannot prepare internal mirrorlist
by ahmad.hidayat@singtel.com
Good day Nathan!
First of all, it's only polite to thank the moderators for approving my post.
Thank you for guiding me on this. I have configured /etc/environment and /etc/yum.conf with the proxy settings, then curled the mirrorlist site, and it was successful.
Our KVM host is behind a firewall; do we need to allow traffic to a specific range of IP addresses for this?
Regards,
Hidayat
ovirtsdk4 python - global maintenance
by yevhen.kyrylchenko@gmail.com
Hi!
I know how to set global maintenance mode using python SDK, something like
vms_service.vm_service(vm.id).maintenance(maintenance_enabled=True)
on HostedEngine.
But now I'm trying to find out whether global maintenance is enabled,
and I'm about to give up - I can't find how to do it in the documentation...
Is there a way to get this info?
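For reference, the hosts collection exposes a hosted_engine element (including a global_maintenance flag) when listed with all_content=True, so something along these lines may work; the connection details below are placeholders (a sketch, not a documented recipe):

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',   # placeholder
    username='admin@internal', password='secret',        # placeholder
    ca_file='ca.pem')
hosts_service = connection.system_service().hosts_service()
for host in hosts_service.list(all_content=True):
    if host.hosted_engine is not None:
        print(host.name, 'global_maintenance =', host.hosted_engine.global_maintenance)
connection.close()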
Regards!
remote-viewer, vnc console, error certificate's owner does not match hostname
by Jiří Sléžka
Hello,
I have recently enrolled new certificates on all hosts in our RHV
(4.5.3.5-1.el8ev) cluster but now I cannot connect to VNC or SPICE+VNC
console via remote-viewer (virt-viewer-11.0-2.fc36.x86_64) because of the error
The certificate's owner does not match hostname '10.224.102.72'
10.224.102.72 is the host's IP address.
A connection through the SPICE protocol works fine.
The .vv file looks like:
[virt-viewer]
type=vnc
host=10.224.102.72
port=5910
password=*******
# Password is valid for 120 seconds.
delete-this-file=1
fullscreen=0
title=srv.example.com:%d
toggle-fullscreen=shift+f11
release-cursor=shift+f12
secure-attention=ctrl+alt+end
versions=rhev-win64:2.0-160;rhev-win32:2.0-160;rhel8:7.0-3;rhel7:2.0-6;rhel6:99.0-1
newer-version-url=https://rhv.example.com/ovirt-engine/rhv/client-resources
[ovirt]
host=rhv.example.com:443
vm-guid=d9f1e9f8-1111-2222-3333-1c1db6704f21
sso-token=K9r1tHadO7H8oB........JMCSwtcwyD0syaENFA
admin=1
I also tried to copy oVirt's CA cert to ~/.pki/CA/cacert.pem as
mentioned in https://access.redhat.com/solutions/6217601, but the error persists.
The debug log looks like:
remote-viewer --debug Downloads/console.vv --gtk-vnc-debug
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.160:
../src/vncdisplay.c Connected to VNC server
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.160:
../src/vncconnection.c Protocol initialization
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.160:
../src/vncconnection.c Schedule greeting timeout 0x5621f9d53478
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.161:
../src/vncconnection.c Remove timeout 0x5621f9d53478
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.161:
../src/vncconnection.c Server version: 3.8
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.161:
../src/vncconnection.c Sending full greeting
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.161:
../src/vncconnection.c Using version: 3.8
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.190:
../src/vncconnection.c Possible auth 19
(remote-viewer:2445675): virt-viewer-DEBUG: 14:36:54.191: Allocated 1024x768
(remote-viewer:2445675): virt-viewer-DEBUG: 14:36:54.191: Child allocate
1024x768
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.192:
../src/vncconnection.c Emit main context 14
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.192:
../src/vncconnection.c Thinking about auth type 19
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.192:
../src/vncconnection.c Decided on auth type 19
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.192:
../src/vncconnection.c Waiting for auth type
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.192:
../src/vncconnection.c Choose auth 19
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.192:
../src/vncconnection.c Checking if credentials are needed
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.192:
../src/vncconnection.c No credentials required
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.194:
../src/vncconnection.c Possible VeNCrypt sub-auth 261
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.194:
../src/vncconnection.c Emit main context 15
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.194:
../src/vncconnection.c Requested auth subtype 261
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.194:
../src/vncconnection.c Waiting for VeNCrypt auth subtype
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.194:
../src/vncconnection.c Choose auth subtype 261
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.194:
../src/vncconnection.c Checking if credentials are needed
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.194:
../src/vncconnection.c No credentials required
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.194:
../src/vncconnection.c Do TLS handshake
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.195:
../src/vncconnection.c Checking if credentials are needed
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.195:
../src/vncconnection.c Want a TLS clientname
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.195:
../src/vncconnection.c Requesting missing credentials
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.195:
../src/vncconnection.c Emit main context 13
(remote-viewer:2445675): virt-viewer-DEBUG: 14:36:54.195: Got VNC
credential request for 1 credential(s)
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.195:
../src/vncconnection.c Set credential 2 libvirt
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.195:
../src/vncconnection.c Searching for certs in /etc/pki
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.195:
../src/vncconnection.c Searching for certs in /home/user/.pki
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.195:
../src/vncconnection.c Failed to find certificate CA/cacrl.pem
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.195:
../src/vncconnection.c Failed to find certificate
libvirt/private/clientkey.pem
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.195:
../src/vncconnection.c Failed to find certificate libvirt/clientcert.pem
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.195:
../src/vncconnection.c Waiting for missing credentials
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.195:
../src/vncconnection.c Got all credentials
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.195:
../src/vncconnection.c No client cert or key provided
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.195:
../src/vncconnection.c No CA revocation list provided
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.195:
../src/vncconnection.c Handshake was blocking
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.197:
../src/vncconnection.c Handshake was blocking
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.199:
../src/vncconnection.c Handshake was blocking
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.200:
../src/vncconnection.c Handshake was blocking
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.200:
../src/vncconnection.c Handshake done
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.200:
../src/vncconnection.c Validating
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.200:
../src/vncconnection.c Certificate is valid.
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.200:
../src/vncconnection.c Checking chain 0
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.200:
../src/vncconnection.c Error: The certificate's owner does not match
hostname '10.224.102.72'
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.200:
../src/vncconnection.c Emit main context 19
(remote-viewer:2445675): virt-viewer-WARNING **: 14:36:54.200:
vnc-session: got vnc error The certificate's owner does not match
hostname '10.224.102.72'
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.200:
../src/vncdisplay.c VNC server error
(remote-viewer:2445675): gtk-vnc-DEBUG: 14:36:54.200:
../src/vncconnection.c Auth failed
The noVNC client also throws "Something went wrong, connection is closed".
The certificate on one of the hosts looks like:
[root@rhev01 ~]# openssl x509 -in
/etc/pki/vdsm/libvirt-vnc/server-cert.pem -text -noout
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 165 (0xa5)
Signature Algorithm: sha256WithRSAEncryption
Issuer: C = US, O = SU Opava, CN = CA-rhv.example.com.51627
Validity
Not Before: Jan 11 12:06:21 2023 GMT
Not After : Jan 13 12:06:21 2028 GMT
Subject: O = SU Opava, CN = rhev01.net.slu.cz
...
X509v3 Subject Alternative Name:
DNS:rhev01.net.slu.cz
Yes, the certificate has the host's DNS name inside, while the .vv file
uses an IP address. Is this a bug? Can I disable hostname checking in
remote-viewer somehow?
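Until that is answered, a possible workaround is to connect by the DNS name that actually appears in the certificate SAN instead of the IP, e.g. by rewriting the .vv file before launching it. A sketch (it only helps if rhev01.net.slu.cz resolves and is reachable from the client):

import re
import subprocess
import sys

vv_path = sys.argv[1]            # e.g. Downloads/console.vv
fqdn = "rhev01.net.slu.cz"       # the name present in the certificate SAN
text = open(vv_path).read()
# Replace only the first host= line (the [virt-viewer] one), keep the rest.
text = re.sub(r"^host=.*$", "host=" + fqdn, text, count=1, flags=re.M)
open(vv_path, "w").write(text)
subprocess.run(["remote-viewer", vv_path], check=False)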
Thanks in advance,
Jiri
Updating the DNS configuration for the Hosted Engine
by nathan.english@bt.com
Hi All,
I've had a look through the documentation and have not been able to find any up-to-date information on how to do this.
We've now built a permanent DNS solution and need to update the Hosted Engine's DNS server details. Luckily, I have managed to update the hosts' configuration using the Data Center settings, so it's just the Hosted Engine left to complete.
Any advice on where I should make the update? I didn't want to edit the interface config (ifcfg) file directly, as I assumed it's controlled by Ansible somewhere!
Kind Regards,
Nathan
Cannot prepare internal mirrorlist
by ahmad.hidayat@singtel.com
Hi Everyone,
I am new here and hoping to get some advice on an issue I'm encountering.
I am setting up an oVirt self-hosted engine 4.5.4 and require a proxy to access the internet.
Below is the issue I encountered:
[localhost -> 192.168.222.66]: FAILED! => {"changed": false, "msg": "Failed to download metadata for repo 'centos-ceph-pacific': Cannot prepare internal mirrorlist: Curl error (6): Couldn't resolve host name for http://mirrorlist.centos.org/?release=8-stream&arch=x86_64&repo=storage-c... [Could not resolve host: mirrorlist.centos.org]", "rc": 1, "results": []}
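For reference, one way to confirm the proxy itself can reach mirrorlist.centos.org, independently of dnf (a minimal sketch; the proxy address below is a placeholder):

import urllib.request

proxies = {"http": "http://proxy.example.com:3128",    # placeholder proxy
           "https": "http://proxy.example.com:3128"}
opener = urllib.request.build_opener(urllib.request.ProxyHandler(proxies))
resp = opener.open("http://mirrorlist.centos.org/", timeout=10)
print(resp.status, resp.reason)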
Hoping to gain some information to solve it.
Thank you in advance!
Ovirt 4.4.10 AD Integration Error
by hemak88@gmail.com
I am doing AD integration on the oVirt 4.4 manager. The insecure method, with a plain-text password saved in /etc/ovirt-engine/aaa/uat.xxxx.com.properties, works fine. I am using the ovirt-engine-extension-aaa-ldap-setup utility.
However, this is a hard-coded and insecure approach, so I wanted to use startTLS with a PEM-encoded certificate file. I obtained the root and intermediate CA certificates from the AD server and used them with startTLS.
I used the inputs below to configure AD auth with the ovirt-engine-extension-aaa-ldap-setup tool:
Available LDAP implementations:
3 - Active Directory
Please select: 3
Please enter Active Directory Forest name: uat.xxxx.com
Please select protocol to use (startTLS, ldaps, plain) [startTLS]: startTLS
Please select method to obtain PEM encoded CA certificate (File, URL, Inline, System, Insecure): file
File path: /tmp/rootca.pem
Enter search user DN (for example uid=username,dc=example,dc=com or leave empty for anonymous): myself(a)uat.xxxx.com
Enter search user password:
Are you going to use Single Sign-On for Virtual Machines (Yes, No) [Yes]: No
Please specify profile name that will be visible to users [uat.xxxx.com]:
Please provide credentials to test login flow:
Enter user name: myself(a)uat.xxxx.com
Enter user password:
But I am facing an error. What could be the resolution?
WARNING: Error while connecting to 'adserver.uat.xxxx.com': LDAPException(resultCode=82 (local error), errorMessage='The connection reader was unable to successfully complete TLS negotiation: SSLHandshakeException(No trusted certificate found), ldapSDKVersion=4.0.14, revision=c0fb784eebf9d36a67c736d0428fb3577f2e25bb')
I did verify the root and intermediate certificates:
# openssl verify -verbose -CAfile uatrootca.pem uatca.pem
uatca.pem: OK
1. What could be the reason for the "No trusted certificate found" error?
2. Will this method also save the AD user's username and password as plain text in /etc/ovirt-engine/aaa/uat.xxxx.com.properties?
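Regarding (1), a common cause is that the PEM file handed to the setup tool needs the full chain (root plus intermediate concatenated), not just the root. One way to test whether that bundle is enough to complete startTLS against the server, outside the setup tool (a sketch, assuming python3-ldap3 is available and using the host name from the warning):

import ssl
from ldap3 import Connection, Server, Tls

tls = Tls(ca_certs_file="/tmp/rootca.pem", validate=ssl.CERT_REQUIRED)
server = Server("adserver.uat.xxxx.com", port=389, tls=tls)
conn = Connection(server)
conn.open()
print("startTLS ok:", conn.start_tls())   # raises if the chain is not trusted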
oVirt + TrueNAS: Unable to create iSCSI domain - I am missing something obvious
by David Johnson
Good morning folks, and thank you in advance.
I am working on migrating my oVirt backing store from NFS to iSCSI.
*oVirt Environment:*
oVirt Open Virtualization Manager
Software Version:4.4.4.7-1.el8
*TrueNAS environment:*
FreeBSD truenas.local 12.2-RELEASE-p11 75566f060d4(HEAD) TRUENAS amd64
The iSCSI share is on a TrueNAS server, exposed to user VDSM and group 36.
oVirt sees the targeted share, but is unable to make use of it.
The latest issue is "Error while executing action New SAN Storage Domain:
Volume Group block size error, please check your Volume Group
configuration, Supported block size is 512 bytes."
As near as I can tell, oVirt does not support any block size other than 512
bytes, while TrueNAS's smallest OOB block size is 4k.
I know that oVirt on TrueNAS is a common configuration, so I expect I am
missing something really obvious here, probably a TrueNAS setting needed
to make it present 512-byte blocks.
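As a first check, it may help to confirm what block size the host actually sees on the LUN, since oVirt expects 512-byte logical blocks. A sketch (the device name is a placeholder; see lsblk for the iSCSI LUN):

from pathlib import Path

dev = "sdb"   # placeholder: the iSCSI LUN's device name on the host
q = Path("/sys/block") / dev / "queue"
print("logical block size: ", (q / "logical_block_size").read_text().strip())
print("physical block size:", (q / "physical_block_size").read_text().strip())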
Any advice would be helpful.
*David Johnson*
After Upgrade OVirt 4.4.9 > Version 4.4.10.7-1.el8 VM Kernel Crashes after migration
by Ralf Schenk
Dear List,
we upgraded our oVirt 4.4.9 infrastructure (engine and hosts) to the latest
available 4.4.10.x. The engine shows "4.4.10.7-1.el8" on the login page.
Hosts are based on ovirt-node-ng (nodectl info outputs
ovirt-node-ng-4.4.10.2-0.20220303.0).
When we migrate VMs, we see them dying shortly after migration with the
errors below also printed to the console. We are only able to shut these
VMs down (power off) and start them up again. Usually they are started
again on the host that was the target of the migration, and they then run
without issues.
All servers are EPYC-based, but with different CPU versions. All hosts are
in a single cluster; the CPU type is "Secure AMD EPYC". With 4.4.8 and
4.4.9 we had no problems migrating VMs between hosts.
Cluster is:
2 Hosts: AMD EPYC 7401P 24-Core Processor
1 Hosts: AMD EPYC 7402P 24-Core Processor
7 Hosts: AMD EPYC 7502P 32-Core Processor
What can we do? As said, we didn't see such problems during a year of
running 4.4.8.x and 4.4.9.x.
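Since the traces below all go through the timekeeping/hrtimer path, it may be worth recording which clocksource the guests are using before and after a migration (a minimal sketch reading the standard sysfs entries inside the guest):

from pathlib import Path

base = Path("/sys/devices/system/clocksource/clocksource0")
print("current clocksource:   ", (base / "current_clocksource").read_text().strip())
print("available clocksources:", (base / "available_clocksource").read_text().strip())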
Syslog showing the crash and a date/clock error or sudden offset:
Jan 19 19:23:22 myvmXX kernel: [163933.218848] rcu: INFO: rcu_sched
self-detected stall on CPU
Jan 19 19:23:22 myvmXX kernel: [163933.218876] rcu: 1-...!: (8 GPs
behind) idle=78a/0/0x1 softirq=4456350/4456351 fqs=0
Jan 19 19:23:22 myvmXX kernel: [163933.218901] (t=537752 jiffies
g=9505317 q=22)
Jan 19 19:23:22 myvmXX kernel: [163933.218903] rcu: rcu_sched kthread
starved for 537752 jiffies! g9505317 f0x0 RCU_GP_WAIT_FQS(5)
->state=0x402 ->cpu=0
Jan 19 19:23:22 myvmXX kernel: [163933.218932] rcu: RCU grace-period
kthread stack dump:
Jan 19 19:23:22 myvmXX kernel: [163933.218949] rcu_sched I 0
10 2 0x80004000
Jan 19 19:23:22 myvmXX kernel: [163933.218951] Call Trace:
Jan 19 19:23:22 myvmXX kernel: [163933.218957] __schedule+0x2e3/0x740
Jan 19 19:23:22 myvmXX kernel: [163933.218960] schedule+0x42/0xb0
Jan 19 19:23:22 myvmXX kernel: [163933.218961] schedule_timeout+0x8a/0x160
Jan 19 19:23:22 myvmXX kernel: [163933.218964] ?
rcu_accelerate_cbs+0x28/0x190
Jan 19 19:23:22 myvmXX kernel: [163933.218967] ?
__next_timer_interrupt+0xe0/0xe0
Jan 19 19:23:22 myvmXX kernel: [163933.218969] rcu_gp_kthread+0x48d/0x9a0
Jan 19 19:23:22 myvmXX kernel: [163933.218971] kthread+0x104/0x140
Jan 19 19:23:22 myvmXX kernel: [163933.218972] ? kfree_call_rcu+0x20/0x20
Jan 19 19:23:22 myvmXX kernel: [163933.218974] ? kthread_park+0x90/0x90
Jan 19 19:23:22 myvmXX kernel: [163933.218975] ret_from_fork+0x35/0x40
Jan 19 19:23:22 myvmXX kernel: [163933.218982] Sending NMI from CPU 1 to
CPUs 0:
Jan 19 19:23:22 myvmXX kernel: [163933.219978] NMI backtrace for cpu 0
Jan 19 19:23:22 myvmXX kernel: [163933.219978] CPU: 0 PID: 0 Comm:
swapper/0 Not tainted 5.4.0-137-generic #154-Ubuntu
Jan 19 19:23:22 myvmXX kernel: [163933.219979] Hardware name: oVirt
RHEL, BIOS 1.15.0-1.module_el8.6.0+1087+b42c8331 04/01/2014
Jan 19 19:23:22 myvmXX kernel: [163933.219979] RIP:
0010:timekeeping_advance+0x12f/0x5a0
Jan 19 19:23:22 myvmXX kernel: [163933.219980] Code: 00 48 8b 35 e3 a0
ec 01 bb 00 ca 9a 3b 49 29 c7 48 01 05 04 a0 ec 01 48 01 05 35 a0 ec 01
48 89 f2 48 d3 e2 8b 0d fd 9f ec 01 <48> 03 15 fa 9f ec 01 48 89 15 f3
9f ec 01 48 d3 e3 48 39 da 72 57
Jan 19 19:23:22 myvmXX kernel: [163933.219981] RSP:
0018:ffffb5a580003e40 EFLAGS: 00000016
Jan 19 19:23:22 myvmXX kernel: [163933.219981] RAX: 000000007a120000
RBX: 000000003b9aca00 RCX: 0000000000000017
Jan 19 19:23:22 myvmXX kernel: [163933.219982] RDX: 003d091a39de0000
RSI: 00001e848d1cef00 RDI: 00000000003d0900
Jan 19 19:23:22 myvmXX kernel: [163933.219982] RBP: ffffb5a580003e98
R08: 0000000000000000 R09: 666657f4d876bbc2
Jan 19 19:23:22 myvmXX kernel: [163933.219983] R10: ffff8d1ee8119618
R11: ffff8d1effa2ffb8 R12: 0000000000000000
Jan 19 19:23:22 myvmXX kernel: [163933.219983] R13: 0000000000000000
R14: 0000000000000009 R15: 6666232f185ebbc2
Jan 19 19:23:22 myvmXX kernel: [163933.219984] FS:
0000000000000000(0000) GS:ffff8d1effa00000(0000) knlGS:0000000000000000
Jan 19 19:23:22 myvmXX kernel: [163933.219984] CS: 0010 DS: 0000 ES:
0000 CR0: 0000000080050033
Jan 19 19:23:22 myvmXX kernel: [163933.219984] CR2: 000055a093634da0
CR3: 00000001280d6000 CR4: 00000000003406f0
Jan 19 19:23:22 myvmXX kernel: [163933.219985] Call Trace:
Jan 19 19:23:22 myvmXX kernel: [163933.219985] <IRQ>
Jan 19 19:23:22 myvmXX kernel: [163933.219985] ? ttwu_do_activate+0x5b/0x70
Jan 19 19:23:22 myvmXX kernel: [163933.219985] update_wall_time+0x10/0x20
Jan 19 19:23:22 myvmXX kernel: [163933.219986]
tick_do_update_jiffies64.part.0+0x88/0xd0
Jan 19 19:23:22 myvmXX kernel: [163933.219986] tick_sched_do_timer+0x58/0x60
Jan 19 19:23:22 myvmXX kernel: [163933.219986] tick_sched_timer+0x2d/0x80
Jan 19 19:23:22 myvmXX kernel: [163933.219987]
__hrtimer_run_queues+0xf7/0x270
Jan 19 19:23:22 myvmXX kernel: [163933.219987] ?
tick_sched_do_timer+0x60/0x60
Jan 19 19:23:22 myvmXX kernel: [163933.219987] hrtimer_interrupt+0x109/0x220
Jan 19 19:23:22 myvmXX kernel: [163933.219988]
smp_apic_timer_interrupt+0x71/0x140
Jan 19 19:23:22 myvmXX kernel: [163933.219988] apic_timer_interrupt+0xf/0x20
Jan 19 19:23:22 myvmXX kernel: [163933.219988] </IRQ>
Jan 19 19:23:22 myvmXX kernel: [163933.219988] RIP:
0010:native_safe_halt+0xe/0x10
Jan 19 19:23:22 myvmXX kernel: [163933.219989] Code: 7b ff ff ff eb bd
90 90 90 90 90 90 e9 07 00 00 00 0f 00 2d d6 39 51 00 f4 c3 66 90 e9 07
00 00 00 0f 00 2d c6 39 51 00 fb f4 <c3> 90 0f 1f 44 00 00 55 48 89 e5
41 55 41 54 53 e8 9d 5e 62 ff 65
Jan 19 19:23:22 myvmXX kernel: [163933.219990] RSP:
0018:ffffffff8be03e18 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff13
Jan 19 19:23:22 myvmXX kernel: [163933.219990] RAX: ffffffff8aef7a20
RBX: 0000000000000000 RCX: 0000000000000001
Jan 19 19:23:22 myvmXX kernel: [163933.219991] RDX: 000000000bc48666
RSI: ffffffff8be03dd8 RDI: 00009519477dc7a5
Jan 19 19:23:22 myvmXX kernel: [163933.219991] RBP: ffffffff8be03e38
R08: 0000000000000001 R09: 0000000000000002
Jan 19 19:23:22 myvmXX kernel: [163933.219992] R10: 0000000000000000
R11: 0000000000000001 R12: 0000000000000000
Jan 19 19:23:22 myvmXX kernel: [163933.219992] R13: ffffffff8be13780
R14: 0000000000000000 R15: 0000000000000000
Jan 19 19:23:22 myvmXX kernel: [163933.219992] ?
__cpuidle_text_start+0x8/0x8
Jan 19 19:23:22 myvmXX kernel: [163933.219993] ?
tick_nohz_idle_stop_tick+0x164/0x290
Jan 19 19:23:22 myvmXX kernel: [163933.219993] ? default_idle+0x20/0x140
Jan 19 19:23:22 myvmXX kernel: [163933.219993] arch_cpu_idle+0x15/0x20
Jan 19 19:23:22 myvmXX kernel: [163933.219994] default_idle_call+0x23/0x30
Jan 19 19:23:22 myvmXX kernel: [163933.219994] do_idle+0x1fb/0x270
Jan 19 19:23:22 myvmXX kernel: [163933.219994] cpu_startup_entry+0x20/0x30
Jan 19 19:23:22 myvmXX kernel: [163933.219994] rest_init+0xae/0xb0
Jan 19 19:23:22 myvmXX kernel: [163933.219995] arch_call_rest_init+0xe/0x1b
Jan 19 19:23:22 myvmXX kernel: [163933.219995] start_kernel+0x52f/0x550
Jan 19 19:23:22 myvmXX kernel: [163933.219995]
x86_64_start_reservations+0x24/0x26
Jan 19 19:23:22 myvmXX kernel: [163933.219996] x86_64_start_kernel+0x8f/0x93
Jan 19 19:23:22 myvmXX kernel: [163933.219996]
secondary_startup_64+0xa4/0xb0
Jan 19 19:23:22 myvmXX kernel: [163933.220000] NMI backtrace for cpu 1
Jan 19 19:23:22 myvmXX kernel: [163933.220002] CPU: 1 PID: 0 Comm:
swapper/1 Not tainted 5.4.0-137-generic #154-Ubuntu
Jan 19 19:23:22 myvmXX kernel: [163933.220003] Hardware name: oVirt
RHEL, BIOS 1.15.0-1.module_el8.6.0+1087+b42c8331 04/01/2014
Jan 19 19:23:22 myvmXX kernel: [163933.220003] Call Trace:
Jan 19 19:23:22 myvmXX kernel: [163933.220004] <IRQ>
Jan 19 19:23:22 myvmXX kernel: [163933.220006] dump_stack+0x6d/0x8b
Jan 19 19:23:22 myvmXX kernel: [163933.220008] ?
lapic_can_unplug_cpu+0x80/0x80
Jan 19 19:23:22 myvmXX kernel: [163933.220010]
nmi_cpu_backtrace.cold+0x14/0x53
Jan 19 19:23:22 myvmXX kernel: [163933.220013]
nmi_trigger_cpumask_backtrace+0xe8/0xf0
Jan 19 19:23:22 myvmXX kernel: [163933.220015]
arch_trigger_cpumask_backtrace+0x19/0x20
Jan 19 19:23:22 myvmXX kernel: [163933.220016] rcu_dump_cpu_stacks+0x99/0xcb
Jan 19 19:23:22 myvmXX kernel: [163933.220018]
rcu_sched_clock_irq.cold+0x1b0/0x39c
Jan 19 19:23:22 myvmXX kernel: [163933.220020]
update_process_times+0x2c/0x60
Jan 19 19:23:22 myvmXX kernel: [163933.220021] tick_sched_handle+0x29/0x60
Jan 19 19:23:22 myvmXX kernel: [163933.220022] tick_sched_timer+0x3d/0x80
Jan 19 19:23:22 myvmXX kernel: [163933.220024]
__hrtimer_run_queues+0xf7/0x270
Jan 19 19:23:22 myvmXX kernel: [163933.220026] ?
tick_sched_do_timer+0x60/0x60
Jan 19 19:23:22 myvmXX kernel: [163933.220027] hrtimer_interrupt+0x109/0x220
Jan 19 19:23:22 myvmXX kernel: [163933.220029]
smp_apic_timer_interrupt+0x71/0x140
Jan 19 19:23:22 myvmXX kernel: [163933.220030] apic_timer_interrupt+0xf/0x20
Jan 19 19:23:22 myvmXX kernel: [163933.220031] </IRQ>
Jan 19 19:23:22 myvmXX kernel: [163933.220033] RIP:
0010:native_safe_halt+0xe/0x10
Jan 19 19:23:22 myvmXX kernel: [163933.220035] Code: 7b ff ff ff eb bd
90 90 90 90 90 90 e9 07 00 00 00 0f 00 2d d6 39 51 00 f4 c3 66 90 e9 07
00 00 00 0f 00 2d c6 39 51 00 fb f4 <c3> 90 0f 1f 44 00 00 55 48 89 e5
41 55 41 54 53 e8 9d 5e 62 ff 65
Jan 19 19:23:22 myvmXX kernel: [163933.220036] RSP:
0018:ffffb5a58007be70 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff13
Jan 19 19:23:22 myvmXX kernel: [163933.220037] RAX: ffffffff8aef7a20
RBX: 0000000000000001 RCX: 0000000000000001
Jan 19 19:23:22 myvmXX kernel: [163933.220038] RDX: 000000000d1b7786
RSI: ffffb5a58007be30 RDI: 000095194740bea5
Jan 19 19:23:22 myvmXX kernel: [163933.220039] RBP: ffffb5a58007be90
R08: 0000000000000001 R09: 0000000000000000
Jan 19 19:23:22 myvmXX kernel: [163933.220040] R10: ffff8d1effa5c848
R11: 0000000000000000 R12: 0000000000000001
Jan 19 19:23:22 myvmXX kernel: [163933.220040] R13: ffff8d1eff25af00
R14: 0000000000000000 R15: 0000000000000000
Jan 19 19:23:22 myvmXX kernel: [163933.220042] ?
__cpuidle_text_start+0x8/0x8
Jan 19 19:23:22 myvmXX kernel: [163933.220044] ? default_idle+0x20/0x140
Jan 19 19:23:22 myvmXX kernel: [163933.220046] arch_cpu_idle+0x15/0x20
Jan 19 19:23:22 myvmXX kernel: [163933.220047] default_idle_call+0x23/0x30
Jan 19 19:23:22 myvmXX kernel: [163933.220049] do_idle+0x1fb/0x270
Jan 19 19:23:22 myvmXX kernel: [163933.220051] cpu_startup_entry+0x20/0x30
Jan 19 19:23:22 myvmXX kernel: [163933.220053] start_secondary+0x167/0x1c0
Jan 19 19:23:22 myvmXX kernel: [163933.220054]
secondary_startup_64+0xa4/0xb0
Jan 19 19:24:34 myvmXX systemd[1]: systemd-udevd.service: Watchdog
timeout (limit 3min)!
Jan 19 19:24:34 myvmXX systemd[1]: systemd-udevd.service: Killing
process 421 (systemd-udevd) with signal SIGABRT.
Nov 15 18:38:49 myvmXX kernel: [164024.594313] rcu: INFO: rcu_sched
self-detected stall on CPU
Nov 15 18:38:49 myvmXX kernel: [164024.594352] rcu: 0-...!: (2
ticks this GP) idle=66a/0/0x1 softirq=4017386/4017386 fqs=0
Nov 15 18:38:49 myvmXX kernel: [164024.594380] (t=1844682531753
jiffies g=9505317 q=675)
Nov 15 18:38:49 myvmXX kernel: [164024.594382] rcu: rcu_sched kthread
starved for 1844682531753 jiffies! g9505317 f0x0 RCU_GP_WAIT_FQS(5)
->state=0x200 ->cpu=0
Nov 15 18:38:49 myvmXX kernel: [164024.594413] rcu: RCU grace-period
kthread stack dump:
Nov 15 18:38:49 myvmXX kernel: [164024.594432] rcu_sched R 0
10 2 0x80004000
Nov 15 18:38:49 myvmXX kernel: [164024.594435] Call Trace:
Nov 15 18:38:49 myvmXX kernel: [164024.594445] __schedule+0x2e3/0x740
Nov 15 18:38:49 myvmXX kernel: [164024.594447] schedule+0x42/0xb0
Nov 15 18:38:49 myvmXX kernel: [164024.594449] schedule_timeout+0x8a/0x160
Nov 15 18:38:49 myvmXX kernel: [164024.594453] ?
rcu_accelerate_cbs+0x28/0x190
Nov 15 18:38:49 myvmXX kernel: [164024.594456] ?
__next_timer_interrupt+0xe0/0xe0
Nov 15 18:38:49 myvmXX kernel: [164024.594458] rcu_gp_kthread+0x48d/0x9a0
Nov 15 18:38:49 myvmXX kernel: [164024.594460] kthread+0x104/0x140
Nov 15 18:38:49 myvmXX kernel: [164024.594462] ? kfree_call_rcu+0x20/0x20
Nov 15 18:38:49 myvmXX kernel: [164024.594463] ? kthread_park+0x90/0x90
Nov 15 18:38:49 myvmXX kernel: [164024.594464] ret_from_fork+0x35/0x40
Nov 15 18:38:49 myvmXX kernel: [164024.594467] NMI backtrace for cpu 0
Nov 15 18:38:49 myvmXX kernel: [164024.594470] CPU: 0 PID: 0 Comm:
swapper/0 Not tainted 5.4.0-137-generic #154-Ubuntu
Nov 15 18:38:49 myvmXX kernel: [164024.594471] Hardware name: oVirt
RHEL, BIOS 1.15.0-1.module_el8.6.0+1087+b42c8331 04/01/2014
Nov 15 18:38:49 myvmXX kernel: [164024.594472] Call Trace:
Nov 15 18:38:49 myvmXX kernel: [164024.594473] <IRQ>
Nov 15 18:38:49 myvmXX kernel: [164024.594476] dump_stack+0x6d/0x8b
Nov 15 18:38:49 myvmXX kernel: [164024.594479] ?
lapic_can_unplug_cpu+0x80/0x80
Nov 15 18:38:49 myvmXX kernel: [164024.594480]
nmi_cpu_backtrace.cold+0x14/0x53
Nov 15 18:38:49 myvmXX kernel: [164024.594484]
nmi_trigger_cpumask_backtrace+0xe8/0xf0
Nov 15 18:38:49 myvmXX kernel: [164024.594485]
arch_trigger_cpumask_backtrace+0x19/0x20
Nov 15 18:38:49 myvmXX kernel: [164024.594488] rcu_dump_cpu_stacks+0x99/0xcb
Nov 15 18:38:49 myvmXX kernel: [164024.594489]
rcu_sched_clock_irq.cold+0x1b0/0x39c
Nov 15 18:38:49 myvmXX kernel: [164024.594491]
update_process_times+0x2c/0x60
Nov 15 18:38:49 myvmXX kernel: [164024.594494] tick_sched_handle+0x29/0x60
Nov 15 18:38:49 myvmXX kernel: [164024.594495] tick_sched_timer+0x3d/0x80
Nov 15 18:38:49 myvmXX kernel: [164024.594497]
__hrtimer_run_queues+0xf7/0x270
Nov 15 18:38:49 myvmXX kernel: [164024.594498] ?
tick_sched_do_timer+0x60/0x60
Nov 15 18:38:49 myvmXX kernel: [164024.594500] hrtimer_interrupt+0x109/0x220
Nov 15 18:38:49 myvmXX kernel: [164024.594503]
smp_apic_timer_interrupt+0x71/0x140
Nov 15 18:38:49 myvmXX kernel: [164024.594504] apic_timer_interrupt+0xf/0x20
Nov 15 18:38:49 myvmXX kernel: [164024.594505] </IRQ>
Nov 15 18:38:49 myvmXX kernel: [164024.594507] RIP:
0010:native_safe_halt+0xe/0x10
Nov 15 18:38:49 myvmXX kernel: [164024.594510] Code: 7b ff ff ff eb bd
90 90 90 90 90 90 e9 07 00 00 00 0f 00 2d d6 39 51 00 f4 c3 66 90 e9 07
00 00 00 0f 00 2d c6 39 51 00 fb f4 <c3> 90 0f 1f 44 00 00 55 48 89 e5
41 55 41 54 53 e8 9d 5e 62 ff 65
Nov 15 18:38:49 myvmXX kernel: [164024.594511] RSP:
0018:ffffffff8be03e18 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff13
Nov 15 18:38:49 myvmXX kernel: [164024.594513] RAX: ffffffff8aef7a20
RBX: 0000000000000000 RCX: 0000000000000001
Nov 15 18:38:49 myvmXX kernel: [164024.594513] RDX: 000000000bc48666
RSI: ffffffff8be03dd8 RDI: 00009519477dc7a5
Nov 15 18:38:49 myvmXX kernel: [164024.594514] RBP: ffffffff8be03e38
R08: 0000000000000001 R09: 0000000000000002
Nov 15 18:38:49 myvmXX kernel: [164024.594515] R10: 0000000000000000
R11: 0000000000000001 R12: 0000000000000000
Nov 15 18:38:49 myvmXX kernel: [164024.594515] R13: ffffffff8be13780
R14: 0000000000000000 R15: 0000000000000000
Nov 15 18:38:49 myvmXX kernel: [164024.594517] ?
__cpuidle_text_start+0x8/0x8
Nov 15 18:38:49 myvmXX kernel: [164024.594519] ?
tick_nohz_idle_stop_tick+0x164/0x290
Nov 15 18:38:49 myvmXX kernel: [164024.594521] ? default_idle+0x20/0x140
Nov 15 18:38:49 myvmXX kernel: [164024.594524] arch_cpu_idle+0x15/0x20
Nov 15 18:38:49 myvmXX kernel: [164024.594525] default_idle_call+0x23/0x30
Nov 15 18:38:49 myvmXX kernel: [164024.594528] do_idle+0x1fb/0x270
Nov 15 18:38:49 myvmXX kernel: [164024.594530] cpu_startup_entry+0x20/0x30
Nov 15 18:38:49 myvmXX kernel: [164024.594532] rest_init+0xae/0xb0
Nov 15 18:38:49 myvmXX kernel: [164024.594536] arch_call_rest_init+0xe/0x1b
Nov 15 18:38:49 myvmXX kernel: [164024.594537] start_kernel+0x52f/0x550
Nov 15 18:38:49 myvmXX kernel: [164024.594539]
x86_64_start_reservations+0x24/0x26
Nov 15 18:38:49 myvmXX kernel: [164024.594541] x86_64_start_kernel+0x8f/0x93
Nov 15 18:38:49 myvmXX kernel: [164024.594544]
secondary_startup_64+0xa4/0xb0
Jan 19 19:28:51 myvmXX systemd-sysctl[413]: Not setting net/ipv4/conf/all/promote_secondaries (explicit setting exists).
--
Ralf Schenk
Databay AG
fon: +49 2405 40837-0
mail: rs(a)databay.de
web: www.databay.de
unsynced after remove brick
by Dominique D
Hello,
Yesterday I had to remove the brick of my first server (HCI with 3 servers) for maintenance and recover hard disks.
3 servers with 4 disks per server in raid5. 1 brick per server
i did :
gluster volume remove-brick data replica 2 ovnode1s.telecom.lan:/gluster_bricks/datassd/datassd force
After deleting the brick, I had 8 unsynced entries present and this morning I have 6.
What should I do to resolve my unsynced ?
[root@ovnode2 ~]# gluster volume status
Status of volume: datassd
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick ovnode2s.telecom.lan:/gluster_bricks/datassd/datassd   49152   0   Y   2431
Brick ovnode3s.telecom.lan:/gluster_bricks/datassd/datassd   49152   0   Y   2379
Self-heal Daemon on localhost N/A N/A Y 2442
Self-heal Daemon on ovnode3s.telecom.lan N/A N/A Y 2390
Task Status of Volume datassd
------------------------------------------------------------------------------
[root@ovnode2 ~]# gluster volume heal datassd info
Brick ovnode2s.telecom.lan:/gluster_bricks/datassd/datassd
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.18
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.7
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.150
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.241
/.shard/21907c8f-abe2-4501-b597-d1c2f9a0fa92.18
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.155
Status: Connected
Number of entries: 6
Brick ovnode3s.telecom.lan:/gluster_bricks/datassd/datassd
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.18
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.7
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.150
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.241
/.shard/21907c8f-abe2-4501-b597-d1c2f9a0fa92.18
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.155
Status: Connected
Number of entries: 6
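For what it's worth, one way to kick off an index heal and watch those entry counts drain (a monitoring sketch using the volume name above; entries that never drain will need closer inspection):

import subprocess
import time

vol = "datassd"
subprocess.run(["gluster", "volume", "heal", vol], check=False)   # trigger index heal
while True:
    out = subprocess.run(["gluster", "volume", "heal", vol, "info"],
                         capture_output=True, text=True).stdout
    counts = [l for l in out.splitlines() if l.startswith("Number of entries:")]
    print(counts)
    if counts and all(c.endswith(" 0") for c in counts):
        break
    time.sleep(60)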
Thank you